Test Report: Docker_Linux 12425

d52130b292d08b0a6095e884aa0df76b8e13fcee:2021-09-15:20469

Failed tests (2/282)

Order  Failed test                           Duration (s)
35     TestAddons/parallel/CSI               383.27
324    TestNetworkPlugins/group/cilium/DNS   336.43
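The TestAddons/parallel/CSI log below shows task-pv-pod-restore stuck in ContainerCreating: the gcp-creds hostPath mount fails because /var/lib/minikube/google_application_credentials.json is not a file on the node, which matches the Audit table entry disabling the gcp-auth addon at 01:27:03, fourteen seconds before the pod was created. A minimal manual triage sketch, assuming the addons-20210915012342-6768 profile from this run were still up (the profile name, kubectl context, pod name, and file path are taken from this log; the commands are ordinary minikube/kubectl usage added here for illustration, not part of the test):

    # Does the credentials file that the gcp-creds hostPath volume points at exist on the node?
    minikube -p addons-20210915012342-6768 ssh -- ls -l /var/lib/minikube/google_application_credentials.json

    # Re-read the mount-related events for the stuck pod.
    kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore -n default

    # If the file is missing because gcp-auth was disabled mid-run, re-enabling the addon
    # should recreate it (this relies on application-default credentials on the host).
    minikube -p addons-20210915012342-6768 addons enable gcp-auth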
TestAddons/parallel/CSI (383.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:484: csi-hostpath-driver pods stabilized in 5.694251ms
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:492: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915012342-6768 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915012342-6768 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:497: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:502: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Pending
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:502: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.08344284s
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:512: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915012342-6768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915012342-6768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:517: (dbg) Run:  kubectl --context addons-20210915012342-6768 delete pod task-pv-pod
addons_test.go:517: (dbg) Done: kubectl --context addons-20210915012342-6768 delete pod task-pv-pod: (1.217428995s)
addons_test.go:523: (dbg) Run:  kubectl --context addons-20210915012342-6768 delete pvc hpvc
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915012342-6768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [016da038-7e39-46c3-9e82-2ac44a0118dd] Pending
helpers_test.go:343: "task-pv-pod-restore" [016da038-7e39-46c3-9e82-2ac44a0118dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:544: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
addons_test.go:544: TestAddons/parallel/CSI: showing logs for failed pods as of 2021-09-15 01:33:18.373959752 +0000 UTC m=+602.537135378
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210915012342-6768 describe po task-pv-pod-restore -n default
addons_test.go:544: (dbg) kubectl --context addons-20210915012342-6768 describe po task-pv-pod-restore -n default:
Name:         task-pv-pod-restore
Namespace:    default
Priority:     0
Node:         addons-20210915012342-6768/192.168.49.2
Start Time:   Wed, 15 Sep 2021 01:27:17 +0000
Labels:       app=task-pv-pod-restore
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
task-pv-container:
Container ID:   
Image:          nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ContainerCreating
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      k8s-minikube
GCP_PROJECT:                     k8s-minikube
GCLOUD_PROJECT:                  k8s-minikube
GOOGLE_CLOUD_PROJECT:            k8s-minikube
CLOUDSDK_CORE_PROJECT:           k8s-minikube
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rw7gr (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc-restore
ReadOnly:   false
kube-api-access-rw7gr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                  Age                   From                     Message
----     ------                  ----                  ----                     -------
Normal   Scheduled               6m1s                  default-scheduler        Successfully assigned default/task-pv-pod-restore to addons-20210915012342-6768
Normal   SuccessfulAttachVolume  6m                    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03"
Warning  FailedMount             111s (x10 over 6m)    kubelet                  MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Warning  FailedMount             102s (x2 over 3m58s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default
addons_test.go:544: (dbg) Non-zero exit: kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default: exit status 1 (72.70227ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: ContainerCreating

** /stderr **
addons_test.go:544: kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:545: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210915012342-6768
helpers_test.go:236: (dbg) docker inspect addons-20210915012342-6768:

-- stdout --
	[
	    {
	        "Id": "f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247",
	        "Created": "2021-09-15T01:24:09.767104389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T01:24:10.333554125Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/hosts",
	        "LogPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247-json.log",
	        "Name": "/addons-20210915012342-6768",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-20210915012342-6768:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210915012342-6768",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32-init/diff:/var/lib/docker/overlay2/c09202ba721929c0f83cfbe05a9a7edd19aceee2a2070d87f35e3eeb64726707/diff:/var/lib/docker/overlay2/e1fdd3ffc1180deb8a0e09cdea2ee8630bda0b622e281199b9b064a7a00561b8/diff:/var/lib/docker/overlay2/7fda6a5848e725fa138c142a377bb78189270dcb404c5d8d80760b6c0e4db5e5/diff:/var/lib/docker/overlay2/b125a57bbad6df390220e5b99a6efb9072c788968bce332e3182af0f92c47abf/diff:/var/lib/docker/overlay2/921ad870272b4b957705c1f8bc4d7ddb53f0315a22385aca7fe0226b36a0ca3c/diff:/var/lib/docker/overlay2/9227d4f119c8b653bf356ed31719e424fb15e6a421ac6d4f0d0c308d989d7fef/diff:/var/lib/docker/overlay2/d9fc973dfee4c8a9e042e1f0eb12ea209ea2274c6dde45e1b94cd507963c5bd5/diff:/var/lib/docker/overlay2/b61e09bd505af8ac96dc90e954ad045e8a9db6fecbd2b842d773d2732d6d9014/diff:/var/lib/docker/overlay2/ec01f649f59978eaae4f1d684cc0e41735bbb7d96a113159392dc8ca2af6f426/diff:/var/lib/docker/overlay2/9da7a2
4e6630bb7d8a35666f96dedeb46e677ac2ad87cd899812701e6a005cf3/diff:/var/lib/docker/overlay2/989e766078c3e8e94edc7f73233905d9ffa58606437ee657327cb81f7f0f84db/diff:/var/lib/docker/overlay2/138969e8d939ffe3390978e423bccef489b8e8043057f844f31ee8b576a448d6/diff:/var/lib/docker/overlay2/32070a9e35fe4dc3a93e90df77d68afde9d83d5e80e09b2d4bec6e9d69b3d916/diff:/var/lib/docker/overlay2/e98ae99d45a0b41ee2f27400daa7bbc81fb9d5b4997074d44839aa4b12f7bfa6/diff:/var/lib/docker/overlay2/8762d166d07ef547b66a7d8435c811a6a5c29371f0d3329eb7225355478d15e1/diff:/var/lib/docker/overlay2/06bb8873c66cd9c23f1e5dddfff72086bd7fb96a709c7828ca394021d7aa9f16/diff:/var/lib/docker/overlay2/bb88812041d10b5820592db379c1d5e010fd5f45726435935cf954c476a1b415/diff:/var/lib/docker/overlay2/1ed176dd388f5b30436eb399c22cd1ba158ceaf858bdb7287b2fbfc8d2e5bf14/diff:/var/lib/docker/overlay2/841c9fd7a64d2fabdc958fc73fe929f282447acf0c1b7236a82e465e71322cdb/diff:/var/lib/docker/overlay2/67e8ae2ce9ede87c152d76e197e8f97a780a6d877e9bae47bcbe9397f27bb009/diff:/var/lib/d
ocker/overlay2/38741c59600445f92d98b126f954d22cc91f0f17a9ee8f520ef7043ad6ae65b2/diff:/var/lib/docker/overlay2/11aa586cc62584d1ad51d305e8e0ab4ac6e0d4c59a6dcb9ef75d3383010b3123/diff:/var/lib/docker/overlay2/5d8f6d21e77b74bddfce3305f95a4b3f675f95d4f83ea6fd4c62d5990431d396/diff:/var/lib/docker/overlay2/89ecf90e7e64abad9349517382dcbf066d4e8405c1a506a4b891b486153023d4/diff:/var/lib/docker/overlay2/03343b56866387dcc649efb11cc50e123f141cc94713e6bd0c2c9bbc3434d33e/diff:/var/lib/docker/overlay2/3f91e9d35fbcde7722183a441bf8c99781b7b5a513faa4c1bb8558a4032d16f4/diff:/var/lib/docker/overlay2/840c99850a911f467995dad0b78247f9fad9f7129aefdfba282cec2ac545ae36/diff:/var/lib/docker/overlay2/bce9487b05b417af0ed326e59728f044c0cb9197f27450f37c06ce2d86299f82/diff:/var/lib/docker/overlay2/a03daf7ac351e27eeb3415580fa8e6712145052964da904a40687062073b9cb7/diff:/var/lib/docker/overlay2/d8c4f7ef1395988a5900bc0f4888bafe68cf81bb8b66253d25ef2d23f4c14faf/diff:/var/lib/docker/overlay2/97c6dad15bffcf7946e0f7affbbcbecd6d71eedfbec326858897c30494c
eced3/diff:/var/lib/docker/overlay2/f814b86959f1a92bff34a7366c025dc4e4059eafb72a6d03d7ae44f2372942b1/diff:/var/lib/docker/overlay2/5fcfeb62c4286d549aa184405e89f5a2a73c30bdd969089642958abcd3d1878b/diff:/var/lib/docker/overlay2/f470952a996d35d6c6e34072852573a7656669d3791436814686a3fc712d9315/diff:/var/lib/docker/overlay2/5f594f121798b617f8190d75f72a91913ed67054c5262fce467bc025910fe6c1/diff:/var/lib/docker/overlay2/fd74cf7beb49d3fece3b06f4752e25c3581ba5069e9852d31ae298b24a6bbe1c/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210915012342-6768",
	                "Source": "/var/lib/docker/volumes/addons-20210915012342-6768/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210915012342-6768",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210915012342-6768",
	                "name.minikube.sigs.k8s.io": "addons-20210915012342-6768",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cf8a1ef7e2ad7d5fb48ed1fd15191f2c3c9ba6b683146e26edd7d91e240043e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3cf8a1ef7e2a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210915012342-6768": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6a7f6938239"
	                    ],
	                    "NetworkID": "7af3f8389e1586aaac65a3567d1879209c123d113523e5fa9d723966e614a202",
	                    "EndpointID": "c36314264a398a5fcc69901176c13072d6aea588b6d352f6a4f700dde4d74e16",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
helpers_test.go:245: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 logs -n 25
helpers_test.go:253: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20210915012315-6768   | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:38 UTC |
	| delete  | -p                                  | download-only-20210915012315-6768   | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:38 UTC |
	|         | download-only-20210915012315-6768   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-only-20210915012315-6768   | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:39 UTC |
	|         | download-only-20210915012315-6768   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-docker-20210915012339-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:42 UTC | Wed, 15 Sep 2021 01:23:42 UTC |
	|         | download-docker-20210915012339-6768 |                                     |         |         |                               |                               |
	| start   | -p addons-20210915012342-6768       | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:43 UTC | Wed, 15 Sep 2021 01:26:03 UTC |
	|         | --wait=true --memory=4000           |                                     |         |         |                               |                               |
	|         | --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --addons=registry                   |                                     |         |         |                               |                               |
	|         | --addons=metrics-server             |                                     |         |         |                               |                               |
	|         | --addons=olm                        |                                     |         |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                               |                               |
	|         | --driver=docker                     |                                     |         |         |                               |                               |
	|         | --container-runtime=docker          |                                     |         |         |                               |                               |
	|         | --addons=ingress                    |                                     |         |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:03 UTC | Wed, 15 Sep 2021 01:26:17 UTC |
	|         | addons enable gcp-auth              |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:17 UTC | Wed, 15 Sep 2021 01:26:27 UTC |
	|         | addons enable gcp-auth --force      |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:32 UTC | Wed, 15 Sep 2021 01:26:33 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768 ip       | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:54 UTC | Wed, 15 Sep 2021 01:26:54 UTC |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:54 UTC | Wed, 15 Sep 2021 01:26:55 UTC |
	|         | addons disable registry             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:56 UTC | Wed, 15 Sep 2021 01:26:57 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:03 UTC | Wed, 15 Sep 2021 01:27:03 UTC |
	|         | addons disable gcp-auth             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768 ssh      | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:05 UTC | Wed, 15 Sep 2021 01:27:06 UTC |
	|         | curl -s http://127.0.0.1/ -H        |                                     |         |         |                               |                               |
	|         | 'Host: nginx.example.com'           |                                     |         |         |                               |                               |
	| -p      | addons-20210915012342-6768          | addons-20210915012342-6768          | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:06 UTC | Wed, 15 Sep 2021 01:27:34 UTC |
	|         | addons disable ingress              |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:23:43
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.17 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 01:23:43.042897    7717 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:23:43.042969    7717 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:23:43.042974    7717 out.go:311] Setting ErrFile to fd 2...
	I0915 01:23:43.042980    7717 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:23:43.043100    7717 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	I0915 01:23:43.043369    7717 out.go:305] Setting JSON to false
	I0915 01:23:43.076298    7717 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":386,"bootTime":1631668637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0915 01:23:43.076406    7717 start.go:121] virtualization: kvm guest
	I0915 01:23:43.078554    7717 out.go:177] * [addons-20210915012342-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
	I0915 01:23:43.078694    7717 notify.go:169] Checking for updates...
	I0915 01:23:43.080022    7717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	I0915 01:23:43.081416    7717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 01:23:43.082672    7717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	I0915 01:23:43.083817    7717 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 01:23:43.083997    7717 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 01:23:43.127173    7717 docker.go:132] docker version: linux-19.03.15
	I0915 01:23:43.127258    7717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:23:43.202779    7717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:43.15849678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:23:43.202893    7717 docker.go:237] overlay module found
	I0915 01:23:43.204689    7717 out.go:177] * Using the docker driver based on user configuration
	I0915 01:23:43.204708    7717 start.go:278] selected driver: docker
	I0915 01:23:43.204714    7717 start.go:751] validating driver "docker" against <nil>
	I0915 01:23:43.204733    7717 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0915 01:23:43.204774    7717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0915 01:23:43.204794    7717 out.go:242] ! Your cgroup does not allow setting memory.
	I0915 01:23:43.206184    7717 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0915 01:23:43.206923    7717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:23:43.276661    7717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:43.238196574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:23:43.276741    7717 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 01:23:43.276873    7717 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 01:23:43.276893    7717 cni.go:93] Creating CNI manager for ""
	I0915 01:23:43.276899    7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:23:43.276905    7717 start_flags.go:278] config:
	{Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:23:43.278754    7717 out.go:177] * Starting control plane node addons-20210915012342-6768 in cluster addons-20210915012342-6768
	I0915 01:23:43.278774    7717 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 01:23:43.280394    7717 out.go:177] * Pulling base image ...
	I0915 01:23:43.280429    7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 01:23:43.280460    7717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 01:23:43.280472    7717 cache.go:57] Caching tarball of preloaded images
	I0915 01:23:43.280531    7717 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 01:23:43.280616    7717 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 01:23:43.280636    7717 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.1 on docker
	I0915 01:23:43.280911    7717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json ...
	I0915 01:23:43.280943    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json: {Name:mk3c1835448dfec8bc8961af8314578e08ae9a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:23:43.361279    7717 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:23:43.361425    7717 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 01:23:43.361442    7717 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
	I0915 01:23:43.361447    7717 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
	I0915 01:23:43.361462    7717 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
	I0915 01:23:43.361471    7717 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 from local cache
	I0915 01:24:06.834928    7717 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 from cached tarball
	I0915 01:24:06.834965    7717 cache.go:206] Successfully downloaded all kic artifacts
	I0915 01:24:06.835002    7717 start.go:313] acquiring machines lock for addons-20210915012342-6768: {Name:mkc7ac9c365edb65286f5fa8828239238f7b72b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 01:24:06.835113    7717 start.go:317] acquired machines lock for "addons-20210915012342-6768" in 87.695µs
	I0915 01:24:06.835136    7717 start.go:89] Provisioning new machine with config: &{Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 01:24:06.835200    7717 start.go:126] createHost starting for "" (driver="docker")
	I0915 01:24:06.837459    7717 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 01:24:06.837672    7717 start.go:160] libmachine.API.Create for "addons-20210915012342-6768" (driver="docker")
	I0915 01:24:06.837701    7717 client.go:168] LocalClient.Create starting
	I0915 01:24:06.837814    7717 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem
	I0915 01:24:07.123885    7717 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem
	I0915 01:24:07.214001    7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 01:24:07.249160    7717 cli_runner.go:162] docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 01:24:07.249250    7717 network_create.go:255] running [docker network inspect addons-20210915012342-6768] to gather additional debugging logs...
	I0915 01:24:07.249273    7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768
	W0915 01:24:07.282180    7717 cli_runner.go:162] docker network inspect addons-20210915012342-6768 returned with exit code 1
	I0915 01:24:07.282206    7717 network_create.go:258] error running [docker network inspect addons-20210915012342-6768]: docker network inspect addons-20210915012342-6768: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210915012342-6768
	I0915 01:24:07.282218    7717 network_create.go:260] output of [docker network inspect addons-20210915012342-6768]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210915012342-6768
	
	** /stderr **
	I0915 01:24:07.282260    7717 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 01:24:07.315859    7717 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007183c0] misses:0}
	I0915 01:24:07.315897    7717 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 01:24:07.315912    7717 network_create.go:106] attempt to create docker network addons-20210915012342-6768 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 01:24:07.315949    7717 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210915012342-6768
	I0915 01:24:07.387632    7717 network_create.go:90] docker network addons-20210915012342-6768 192.168.49.0/24 created
	I0915 01:24:07.387667    7717 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210915012342-6768" container
	I0915 01:24:07.387724    7717 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0915 01:24:07.421887    7717 cli_runner.go:115] Run: docker volume create addons-20210915012342-6768 --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --label created_by.minikube.sigs.k8s.io=true
	I0915 01:24:07.456567    7717 oci.go:102] Successfully created a docker volume addons-20210915012342-6768
	I0915 01:24:07.456638    7717 cli_runner.go:115] Run: docker run --rm --name addons-20210915012342-6768-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --entrypoint /usr/bin/test -v addons-20210915012342-6768:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
	I0915 01:24:09.647715    7717 cli_runner.go:168] Completed: docker run --rm --name addons-20210915012342-6768-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --entrypoint /usr/bin/test -v addons-20210915012342-6768:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib: (2.191021089s)
	I0915 01:24:09.647763    7717 oci.go:106] Successfully prepared a docker volume addons-20210915012342-6768
	W0915 01:24:09.647796    7717 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0915 01:24:09.647808    7717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0915 01:24:09.647812    7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 01:24:09.647841    7717 kic.go:179] Starting extracting preloaded images to volume ...
	I0915 01:24:09.647857    7717 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 01:24:09.647899    7717 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210915012342-6768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 01:24:09.729679    7717 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210915012342-6768 --name addons-20210915012342-6768 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210915012342-6768 --network addons-20210915012342-6768 --ip 192.168.49.2 --volume addons-20210915012342-6768:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
	I0915 01:24:10.342564    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Running}}
	I0915 01:24:10.381353    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:10.423362    7717 cli_runner.go:115] Run: docker exec addons-20210915012342-6768 stat /var/lib/dpkg/alternatives/iptables
	I0915 01:24:10.549001    7717 oci.go:281] the created container "addons-20210915012342-6768" has a running status.
	I0915 01:24:10.549056    7717 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa...
	I0915 01:24:10.733442    7717 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 01:24:11.130073    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:11.169086    7717 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 01:24:11.169107    7717 kic_runner.go:115] Args: [docker exec --privileged addons-20210915012342-6768 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 01:24:13.139010    7717 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210915012342-6768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir: (3.491067455s)
	I0915 01:24:13.139044    7717 kic.go:188] duration metric: took 3.491201 seconds to extract preloaded images to volume
	I0915 01:24:13.139124    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:13.174330    7717 machine.go:88] provisioning docker machine ...
	I0915 01:24:13.174364    7717 ubuntu.go:169] provisioning hostname "addons-20210915012342-6768"
	I0915 01:24:13.174419    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:13.207894    7717 main.go:130] libmachine: Using SSH client type: native
	I0915 01:24:13.208085    7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0915 01:24:13.208100    7717 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210915012342-6768 && echo "addons-20210915012342-6768" | sudo tee /etc/hostname
	I0915 01:24:13.395488    7717 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210915012342-6768
	
	I0915 01:24:13.395557    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:13.431035    7717 main.go:130] libmachine: Using SSH client type: native
	I0915 01:24:13.431185    7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0915 01:24:13.431207    7717 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210915012342-6768' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210915012342-6768/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210915012342-6768' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 01:24:13.535165    7717 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 01:24:13.535196    7717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube}
	I0915 01:24:13.535220    7717 ubuntu.go:177] setting up certificates
	I0915 01:24:13.535232    7717 provision.go:83] configureAuth start
	I0915 01:24:13.535284    7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
	I0915 01:24:13.570457    7717 provision.go:138] copyHostCerts
	I0915 01:24:13.570528    7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cert.pem (1123 bytes)
	I0915 01:24:13.570617    7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/key.pem (1679 bytes)
	I0915 01:24:13.570666    7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.pem (1078 bytes)
	I0915 01:24:13.570711    7717 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem org=jenkins.addons-20210915012342-6768 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210915012342-6768]
	I0915 01:24:13.803214    7717 provision.go:172] copyRemoteCerts
	I0915 01:24:13.803261    7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 01:24:13.803290    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:13.839139    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:13.918310    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 01:24:13.935224    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0915 01:24:13.949906    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 01:24:13.964223    7717 provision.go:86] duration metric: configureAuth took 428.982888ms
	I0915 01:24:13.964242    7717 ubuntu.go:193] setting minikube options for container-runtime
	I0915 01:24:13.964386    7717 config.go:177] Loaded profile config "addons-20210915012342-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:24:13.964430    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:13.999813    7717 main.go:130] libmachine: Using SSH client type: native
	I0915 01:24:13.999958    7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0915 01:24:13.999976    7717 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 01:24:14.105168    7717 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 01:24:14.105191    7717 ubuntu.go:71] root file system type: overlay
	I0915 01:24:14.105379    7717 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 01:24:14.105434    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:14.141278    7717 main.go:130] libmachine: Using SSH client type: native
	I0915 01:24:14.141417    7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0915 01:24:14.141475    7717 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 01:24:14.250866    7717 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 01:24:14.250942    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:14.287057    7717 main.go:130] libmachine: Using SSH client type: native
	I0915 01:24:14.287213    7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0915 01:24:14.287234    7717 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 01:24:14.858052    7717 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-09-15 01:24:14.247123693 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0915 01:24:14.858083    7717 machine.go:91] provisioned docker machine in 1.683733527s
	I0915 01:24:14.858094    7717 client.go:171] LocalClient.Create took 8.020386686s
	I0915 01:24:14.858104    7717 start.go:168] duration metric: libmachine.API.Create for "addons-20210915012342-6768" took 8.020432581s
	I0915 01:24:14.858113    7717 start.go:267] post-start starting for "addons-20210915012342-6768" (driver="docker")
	I0915 01:24:14.858118    7717 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 01:24:14.858175    7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 01:24:14.858214    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:14.893956    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:14.974875    7717 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 01:24:14.977367    7717 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 01:24:14.977390    7717 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 01:24:14.977401    7717 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 01:24:14.977408    7717 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 01:24:14.977418    7717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/addons for local assets ...
	I0915 01:24:14.977469    7717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/files for local assets ...
	I0915 01:24:14.977496    7717 start.go:270] post-start completed in 119.376941ms
	I0915 01:24:14.977758    7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
	I0915 01:24:15.012312    7717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json ...
	I0915 01:24:15.012510    7717 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 01:24:15.012569    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:15.047276    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:15.124397    7717 start.go:129] duration metric: createHost completed in 8.289186304s
	I0915 01:24:15.124425    7717 start.go:80] releasing machines lock for "addons-20210915012342-6768", held for 8.289299626s
	I0915 01:24:15.124490    7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
	I0915 01:24:15.159494    7717 ssh_runner.go:152] Run: systemctl --version
	I0915 01:24:15.159541    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:15.159553    7717 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 01:24:15.159594    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:15.200965    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:15.202135    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:15.333961    7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 01:24:15.342280    7717 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 01:24:15.350251    7717 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 01:24:15.350294    7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 01:24:15.357943    7717 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 01:24:15.368784    7717 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 01:24:15.425088    7717 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 01:24:15.477832    7717 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 01:24:15.485761    7717 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 01:24:15.537495    7717 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 01:24:15.545498    7717 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 01:24:15.581742    7717 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 01:24:15.620804    7717 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 01:24:15.620881    7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 01:24:15.654470    7717 ssh_runner.go:152] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 01:24:15.657487    7717 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 01:24:15.665636    7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 01:24:15.665683    7717 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 01:24:15.693738    7717 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 01:24:15.693757    7717 docker.go:489] Images already preloaded, skipping extraction
	I0915 01:24:15.693791    7717 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 01:24:15.720708    7717 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 01:24:15.720729    7717 cache_images.go:78] Images are preloaded, skipping loading
	I0915 01:24:15.720773    7717 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 01:24:15.794624    7717 cni.go:93] Creating CNI manager for ""
	I0915 01:24:15.794641    7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:24:15.794648    7717 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 01:24:15.794658    7717 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210915012342-6768 NodeName:addons-20210915012342-6768 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 01:24:15.794774    7717 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20210915012342-6768"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 01:24:15.794863    7717 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210915012342-6768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0915 01:24:15.794928    7717 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
	I0915 01:24:15.802950    7717 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 01:24:15.802998    7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 01:24:15.809031    7717 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0915 01:24:15.820140    7717 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 01:24:15.830993    7717 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0915 01:24:15.841714    7717 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 01:24:15.844328    7717 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 01:24:15.852253    7717 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768 for IP: 192.168.49.2
	I0915 01:24:15.852288    7717 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key
	I0915 01:24:15.994441    7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt ...
	I0915 01:24:15.994473    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt: {Name:mk82e6b53d2785698b6872502d05efcc2184b0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:15.994641    7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key ...
	I0915 01:24:15.994653    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key: {Name:mk3eb9792b7cf8e12aa1e54183d2ecab549452d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:15.994728    7717 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key
	I0915 01:24:16.380381    7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt ...
	I0915 01:24:16.380419    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt: {Name:mkae95b129223c0a4b86eab2eee067267f086ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.380619    7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key ...
	I0915 01:24:16.380632    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key: {Name:mkab485fae645d4fa089e28b4dd468821ba71f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.380746    7717 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key
	I0915 01:24:16.380758    7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt with IP's: []
	I0915 01:24:16.564833    7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt ...
	I0915 01:24:16.564870    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: {Name:mk3e81de32efda483a4ab503975f0ec212d4e7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.565062    7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key ...
	I0915 01:24:16.565077    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key: {Name:mk995c7bd3a360efbd0d06dd026804113b452757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.565164    7717 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2
	I0915 01:24:16.565175    7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0915 01:24:16.750988    7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 ...
	I0915 01:24:16.751026    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2: {Name:mk5de554520d0b626bf0e2bec0fe05f0f559dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.751207    7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2 ...
	I0915 01:24:16.751219    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2: {Name:mk315e0bd186039b3eff0de60eb16ab99d6ae1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.751306    7717 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt
	I0915 01:24:16.751429    7717 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key
	I0915 01:24:16.751489    7717 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key
	I0915 01:24:16.751498    7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt with IP's: []
	I0915 01:24:16.951055    7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt ...
	I0915 01:24:16.951082    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt: {Name:mkd30313ce32e16a3a2ef08933646a28bb8e3826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.951251    7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key ...
	I0915 01:24:16.951264    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key: {Name:mke8365a3af75830a535fcade17439f64d264638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:16.951474    7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 01:24:16.951511    7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem (1078 bytes)
	I0915 01:24:16.951533    7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem (1123 bytes)
	I0915 01:24:16.951552    7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem (1679 bytes)
	I0915 01:24:16.952422    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 01:24:16.969110    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 01:24:16.984223    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 01:24:16.999307    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 01:24:17.013872    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 01:24:17.028189    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 01:24:17.042420    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 01:24:17.056678    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 01:24:17.070952    7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 01:24:17.085257    7717 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 01:24:17.095807    7717 ssh_runner.go:152] Run: openssl version
	I0915 01:24:17.100299    7717 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 01:24:17.108521    7717 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 01:24:17.111151    7717 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 01:24 /usr/share/ca-certificates/minikubeCA.pem
	I0915 01:24:17.111193    7717 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 01:24:17.115429    7717 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 01:24:17.121690    7717 kubeadm.go:390] StartCluster: {Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:24:17.121782    7717 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 01:24:17.150462    7717 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 01:24:17.156694    7717 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 01:24:17.162528    7717 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0915 01:24:17.162577    7717 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 01:24:17.168324    7717 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 01:24:17.168354    7717 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 01:24:30.215708    7717 out.go:204]   - Generating certificates and keys ...
	I0915 01:24:30.218847    7717 out.go:204]   - Booting up control plane ...
	I0915 01:24:30.221495    7717 out.go:204]   - Configuring RBAC rules ...
	I0915 01:24:30.224120    7717 cni.go:93] Creating CNI manager for ""
	I0915 01:24:30.224137    7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:24:30.224166    7717 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 01:24:30.224307    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:30.224399    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=addons-20210915012342-6768 minikube.k8s.io/updated_at=2021_09_15T01_24_30_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:30.564117    7717 ops.go:34] apiserver oom_adj: -16
	I0915 01:24:30.564201    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:31.114911    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:31.614576    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:32.114492    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:32.615099    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:33.114794    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:33.614356    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:34.115137    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:34.614947    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:35.115015    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:35.614368    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:36.114877    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:36.614425    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:37.115312    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:37.614796    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:38.114666    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:38.615236    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:39.115344    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:39.614535    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:40.114800    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:40.614934    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:41.114782    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:41.614898    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:42.115358    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:42.614597    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:43.114557    7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 01:24:43.171583    7717 kubeadm.go:985] duration metric: took 12.947329577s to wait for elevateKubeSystemPrivileges.
	I0915 01:24:43.171611    7717 kubeadm.go:392] StartCluster complete in 26.049927534s
	I0915 01:24:43.171627    7717 settings.go:142] acquiring lock: {Name:mk9e57581826ef1ab9c29fc377d83267ef74c695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:43.171746    7717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	I0915 01:24:43.172265    7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig: {Name:mkf4cafc535fa65fd368ee043668c4a421c567e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:24:43.688182    7717 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210915012342-6768" rescaled to 1
	I0915 01:24:43.688250    7717 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 01:24:43.690008    7717 out.go:177] * Verifying Kubernetes components...
	I0915 01:24:43.690072    7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 01:24:43.688296    7717 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 01:24:43.688311    7717 addons.go:404] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0915 01:24:43.690178    7717 addons.go:65] Setting ingress=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690190    7717 addons.go:65] Setting metrics-server=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690198    7717 addons.go:153] Setting addon ingress=true in "addons-20210915012342-6768"
	I0915 01:24:43.690205    7717 addons.go:153] Setting addon metrics-server=true in "addons-20210915012342-6768"
	I0915 01:24:43.690216    7717 addons.go:65] Setting registry=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690231    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.690233    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.688453    7717 config.go:177] Loaded profile config "addons-20210915012342-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:24:43.690248    7717 addons.go:65] Setting olm=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690257    7717 addons.go:153] Setting addon olm=true in "addons-20210915012342-6768"
	I0915 01:24:43.690282    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.690296    7717 addons.go:65] Setting helm-tiller=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690237    7717 addons.go:153] Setting addon registry=true in "addons-20210915012342-6768"
	I0915 01:24:43.690314    7717 addons.go:153] Setting addon helm-tiller=true in "addons-20210915012342-6768"
	I0915 01:24:43.690298    7717 addons.go:65] Setting default-storageclass=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690334    7717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210915012342-6768"
	I0915 01:24:43.690326    7717 addons.go:65] Setting storage-provisioner=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690348    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.690361    7717 addons.go:153] Setting addon storage-provisioner=true in "addons-20210915012342-6768"
	I0915 01:24:43.690359    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	W0915 01:24:43.690377    7717 addons.go:165] addon storage-provisioner should already be in state true
	I0915 01:24:43.690409    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.690680    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690766    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690773    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690808    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690830    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690178    7717 addons.go:65] Setting volumesnapshots=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690865    7717 addons.go:153] Setting addon volumesnapshots=true in "addons-20210915012342-6768"
	I0915 01:24:43.690877    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.690883    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.690904    7717 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20210915012342-6768"
	I0915 01:24:43.690940    7717 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20210915012342-6768"
	I0915 01:24:43.690967    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.691294    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.691383    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.691391    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.806180    7717 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0915 01:24:43.806270    7717 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 01:24:43.806281    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0915 01:24:43.806340    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.811583    7717 out.go:177]   - Using image ghcr.io/helm/tiller:v2.16.12
	I0915 01:24:43.811706    7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 01:24:43.811715    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2423 bytes)
	I0915 01:24:43.811763    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.815294    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0915 01:24:43.816807    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0915 01:24:43.818230    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0915 01:24:43.820400    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0915 01:24:43.818420    7717 out.go:177]   - Using image quay.io/operatorhubio/catalog:latest
	I0915 01:24:43.824097    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0915 01:24:43.825571    7717 out.go:177]   - Using image quay.io/operator-framework/olm
	I0915 01:24:43.827045    7717 out.go:177]   - Using image registry:2.7.1
	I0915 01:24:43.828423    7717 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0915 01:24:43.828532    7717 addons.go:337] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 01:24:43.828543    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0915 01:24:43.828593    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.818260    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0915 01:24:43.824743    7717 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 01:24:43.833924    7717 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
	I0915 01:24:43.836791    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 01:24:43.837026    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 01:24:43.837090    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.837738    7717 node_ready.go:35] waiting up to 6m0s for node "addons-20210915012342-6768" to be "Ready" ...
	I0915 01:24:43.837865    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0915 01:24:43.837918    7717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 01:24:43.840820    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0915 01:24:43.838004    7717 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 01:24:43.842681    7717 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0915 01:24:43.840998    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 01:24:43.841859    7717 addons.go:153] Setting addon default-storageclass=true in "addons-20210915012342-6768"
	I0915 01:24:43.842519    7717 node_ready.go:49] node "addons-20210915012342-6768" has status "Ready":"True"
	I0915 01:24:43.844083    7717 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0915 01:24:43.844169    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	W0915 01:24:43.845438    7717 addons.go:165] addon default-storageclass should already be in state true
	I0915 01:24:43.845459    7717 node_ready.go:38] duration metric: took 7.683952ms waiting for node "addons-20210915012342-6768" to be "Ready" ...
	I0915 01:24:43.845469    7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
	I0915 01:24:43.845477    7717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 01:24:43.845615    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0915 01:24:43.846897    7717 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0915 01:24:43.846956    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 01:24:43.846966    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 01:24:43.847014    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.846720    7717 addons.go:337] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 01:24:43.847095    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (17019 bytes)
	I0915 01:24:43.847139    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.852220    7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
	I0915 01:24:43.854093    7717 addons.go:337] installing /etc/kubernetes/addons/crds.yaml
	I0915 01:24:43.854140    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
	I0915 01:24:43.854220    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.876195    7717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:43.890788    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.895792    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.908616    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.920110    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.929207    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.950380    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.965377    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:43.966151    7717 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 01:24:43.966169    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 01:24:43.966208    7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
	I0915 01:24:43.973648    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:44.001603    7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
	I0915 01:24:44.227528    7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 01:24:44.227560    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 01:24:44.229546    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 01:24:44.233077    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 01:24:44.308815    7717 addons.go:337] installing /etc/kubernetes/addons/olm.yaml
	I0915 01:24:44.308840    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/olm.yaml (9929 bytes)
	I0915 01:24:44.310305    7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 01:24:44.310354    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 01:24:44.310482    7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 01:24:44.310498    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0915 01:24:44.314115    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0915 01:24:44.314133    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0915 01:24:44.326256    7717 addons.go:337] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 01:24:44.326279    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 01:24:44.328964    7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 01:24:44.329023    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 01:24:44.415354    7717 start.go:729] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0915 01:24:44.415808    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 01:24:44.416743    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 01:24:44.416793    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0915 01:24:44.417490    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0915 01:24:44.420921    7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 01:24:44.420940    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 01:24:44.422783    7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 01:24:44.422801    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0915 01:24:44.428595    7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 01:24:44.428612    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0915 01:24:44.431511    7717 addons.go:337] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 01:24:44.431528    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0915 01:24:44.509822    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 01:24:44.509847    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0915 01:24:44.512135    7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 01:24:44.512191    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0915 01:24:44.520465    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 01:24:44.524728    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 01:24:44.524751    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0915 01:24:44.608775    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 01:24:44.608805    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0915 01:24:44.609056    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 01:24:44.611034    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 01:24:44.725500    7717 addons.go:337] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 01:24:44.725530    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0915 01:24:44.730287    7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 01:24:44.730311    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0915 01:24:45.849684    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 01:24:45.850010    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 01:24:45.850033    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0915 01:24:45.865539    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 01:24:45.865565    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0915 01:24:45.880225    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 01:24:45.880247    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0915 01:24:45.891905    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0915 01:24:45.891926    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0915 01:24:45.905918    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 01:24:45.905939    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0915 01:24:45.919002    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0915 01:24:45.919025    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0915 01:24:45.932386    7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 01:24:45.932415    7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 01:24:45.943658    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 01:24:48.362733    7717 pod_ready.go:102] pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace has status "Ready":"False"
	I0915 01:24:50.070211    7717 pod_ready.go:92] pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.070236    7717 pod_ready.go:81] duration metric: took 6.194003932s waiting for pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.070248    7717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.122129    7717 pod_ready.go:92] pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.122157    7717 pod_ready.go:81] duration metric: took 51.902017ms waiting for pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.122170    7717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.216473    7717 pod_ready.go:92] pod "etcd-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.216501    7717 pod_ready.go:81] duration metric: took 94.32156ms waiting for pod "etcd-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.216516    7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.233771    7717 pod_ready.go:92] pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.233853    7717 pod_ready.go:81] duration metric: took 17.327005ms waiting for pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.233884    7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.322054    7717 pod_ready.go:92] pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.322084    7717 pod_ready.go:81] duration metric: took 88.176786ms waiting for pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.322097    7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xf8sd" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.614874    7717 pod_ready.go:92] pod "kube-proxy-xf8sd" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.614976    7717 pod_ready.go:81] duration metric: took 292.86788ms waiting for pod "kube-proxy-xf8sd" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.615012    7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.715585    7717 pod_ready.go:92] pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
	I0915 01:24:50.715611    7717 pod_ready.go:81] duration metric: took 100.54725ms waiting for pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
	I0915 01:24:50.715622    7717 pod_ready.go:38] duration metric: took 6.870128841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 01:24:50.715641    7717 api_server.go:50] waiting for apiserver process to appear ...
	I0915 01:24:50.715684    7717 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 01:24:52.226484    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.993376186s)
	I0915 01:24:52.226583    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.997008647s)
	I0915 01:24:52.226649    7717 addons.go:375] Verifying addon ingress=true in "addons-20210915012342-6768"
	I0915 01:24:52.226672    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.810806473s)
	I0915 01:24:52.228336    7717 out.go:177] * Verifying ingress addon...
	I0915 01:24:52.230618    7717 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 01:24:52.318693    7717 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 01:24:52.318720    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:52.908898    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:53.528904    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.008409506s)
	I0915 01:24:53.528975    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.111453263s)
	I0915 01:24:53.528977    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.919889177s)
	I0915 01:24:53.528993    7717 addons.go:375] Verifying addon registry=true in "addons-20210915012342-6768"
	W0915 01:24:53.528999    7717 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0915 01:24:53.529030    7717 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0915 01:24:53.529117    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.91806092s)
	I0915 01:24:53.529139    7717 addons.go:375] Verifying addon metrics-server=true in "addons-20210915012342-6768"
	I0915 01:24:53.530827    7717 out.go:177] * Verifying registry addon...
	I0915 01:24:53.531429    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:53.529237    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.679516802s)
	W0915 01:24:53.531587    7717 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0915 01:24:53.531610    7717 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0915 01:24:53.532912    7717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 01:24:53.627039    7717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 01:24:53.627067    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:53.805972    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0915 01:24:53.828448    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:53.892708    7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 01:24:54.214453    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:54.329361    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:54.329646    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.385943177s)
	I0915 01:24:54.329816    7717 addons.go:375] Verifying addon csi-hostpath-driver=true in "addons-20210915012342-6768"
	I0915 01:24:54.329777    7717 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.614078644s)
	I0915 01:24:54.329993    7717 api_server.go:70] duration metric: took 10.641707566s to wait for apiserver process to appear ...
	I0915 01:24:54.330024    7717 api_server.go:86] waiting for apiserver healthz status ...
	I0915 01:24:54.330057    7717 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 01:24:54.331613    7717 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 01:24:54.333837    7717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 01:24:54.414030    7717 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 01:24:54.418035    7717 api_server.go:139] control plane version: v1.22.1
	I0915 01:24:54.418064    7717 api_server.go:129] duration metric: took 88.013428ms to wait for apiserver health ...
	I0915 01:24:54.418074    7717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 01:24:54.421304    7717 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 01:24:54.421341    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:54.430869    7717 system_pods.go:59] 19 kube-system pods found
	I0915 01:24:54.430908    7717 system_pods.go:61] "coredns-78fcd69978-kq6bp" [8ad7f1c9-28f3-46f9-9236-584bc24602ed] Running
	I0915 01:24:54.430917    7717 system_pods.go:61] "coredns-78fcd69978-pmmqc" [4eae89e0-944f-47e4-90dc-bcb15825ce64] Running
	I0915 01:24:54.430925    7717 system_pods.go:61] "csi-hostpath-attacher-0" [daa12cf1-a96e-4844-a77f-f18ec9ae48b6] Pending
	I0915 01:24:54.430933    7717 system_pods.go:61] "csi-hostpath-provisioner-0" [6c3bfda3-b873-4f59-9154-1c1eebec79c6] Pending
	I0915 01:24:54.430941    7717 system_pods.go:61] "csi-hostpath-resizer-0" [7917d661-b4e0-4df5-982d-eec18d58a5c7] Pending
	I0915 01:24:54.430953    7717 system_pods.go:61] "csi-hostpath-snapshotter-0" [82c44e60-ff9d-421b-9a25-ff2c4e39cf20] Pending
	I0915 01:24:54.430960    7717 system_pods.go:61] "csi-hostpathplugin-0" [d7439a1e-b83f-42da-a10b-9b53be29e4a5] Pending
	I0915 01:24:54.430968    7717 system_pods.go:61] "etcd-addons-20210915012342-6768" [e81b6548-9356-458a-9287-10f8ee37d852] Running
	I0915 01:24:54.430975    7717 system_pods.go:61] "kube-apiserver-addons-20210915012342-6768" [cf986ab2-560f-42d1-a77c-10379e41992e] Running
	I0915 01:24:54.430984    7717 system_pods.go:61] "kube-controller-manager-addons-20210915012342-6768" [641f1bdb-2828-4878-9bbc-1451be499385] Running
	I0915 01:24:54.430991    7717 system_pods.go:61] "kube-proxy-xf8sd" [b0c1ab2b-6d53-4f60-be02-018babe698ea] Running
	I0915 01:24:54.430998    7717 system_pods.go:61] "kube-scheduler-addons-20210915012342-6768" [d401b548-7760-4572-8d56-bdec28034c57] Running
	I0915 01:24:54.431011    7717 system_pods.go:61] "metrics-server-77c99ccb96-wpjcb" [47810a13-c9ae-42d6-a4b8-981ff0c391d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 01:24:54.431023    7717 system_pods.go:61] "registry-d2wk4" [89a45b74-b58c-468a-8c45-d173530c049f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 01:24:54.431034    7717 system_pods.go:61] "registry-proxy-vhhfv" [985239f4-f991-4990-bf20-39effa769ac7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 01:24:54.431046    7717 system_pods.go:61] "snapshot-controller-989f9ddc8-ff7mh" [e8b3528d-9cc8-4501-97dd-385861a0b54c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 01:24:54.431055    7717 system_pods.go:61] "snapshot-controller-989f9ddc8-wxnqh" [c198cc5e-bdb4-482a-a3a6-af7f9345c6e4] Pending
	I0915 01:24:54.431065    7717 system_pods.go:61] "storage-provisioner" [e0d5c4d0-79f3-4daf-b2a3-dfed5458ee38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 01:24:54.431075    7717 system_pods.go:61] "tiller-deploy-7d9fb5c894-gqw79" [7780487d-b571-401f-a059-bb6ed78f19c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 01:24:54.431083    7717 system_pods.go:74] duration metric: took 13.003689ms to wait for pod list to return data ...
	I0915 01:24:54.431093    7717 default_sa.go:34] waiting for default service account to be created ...
	I0915 01:24:54.511268    7717 default_sa.go:45] found service account: "default"
	I0915 01:24:54.511301    7717 default_sa.go:55] duration metric: took 80.200864ms for default service account to be created ...
	I0915 01:24:54.511313    7717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 01:24:54.527991    7717 system_pods.go:86] 19 kube-system pods found
	I0915 01:24:54.528072    7717 system_pods.go:89] "coredns-78fcd69978-kq6bp" [8ad7f1c9-28f3-46f9-9236-584bc24602ed] Running
	I0915 01:24:54.528096    7717 system_pods.go:89] "coredns-78fcd69978-pmmqc" [4eae89e0-944f-47e4-90dc-bcb15825ce64] Running
	I0915 01:24:54.528115    7717 system_pods.go:89] "csi-hostpath-attacher-0" [daa12cf1-a96e-4844-a77f-f18ec9ae48b6] Pending
	I0915 01:24:54.528135    7717 system_pods.go:89] "csi-hostpath-provisioner-0" [6c3bfda3-b873-4f59-9154-1c1eebec79c6] Pending
	I0915 01:24:54.528155    7717 system_pods.go:89] "csi-hostpath-resizer-0" [7917d661-b4e0-4df5-982d-eec18d58a5c7] Pending
	I0915 01:24:54.528174    7717 system_pods.go:89] "csi-hostpath-snapshotter-0" [82c44e60-ff9d-421b-9a25-ff2c4e39cf20] Pending
	I0915 01:24:54.528192    7717 system_pods.go:89] "csi-hostpathplugin-0" [d7439a1e-b83f-42da-a10b-9b53be29e4a5] Pending
	I0915 01:24:54.528211    7717 system_pods.go:89] "etcd-addons-20210915012342-6768" [e81b6548-9356-458a-9287-10f8ee37d852] Running
	I0915 01:24:54.528232    7717 system_pods.go:89] "kube-apiserver-addons-20210915012342-6768" [cf986ab2-560f-42d1-a77c-10379e41992e] Running
	I0915 01:24:54.528253    7717 system_pods.go:89] "kube-controller-manager-addons-20210915012342-6768" [641f1bdb-2828-4878-9bbc-1451be499385] Running
	I0915 01:24:54.528273    7717 system_pods.go:89] "kube-proxy-xf8sd" [b0c1ab2b-6d53-4f60-be02-018babe698ea] Running
	I0915 01:24:54.528293    7717 system_pods.go:89] "kube-scheduler-addons-20210915012342-6768" [d401b548-7760-4572-8d56-bdec28034c57] Running
	I0915 01:24:54.528320    7717 system_pods.go:89] "metrics-server-77c99ccb96-wpjcb" [47810a13-c9ae-42d6-a4b8-981ff0c391d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 01:24:54.528345    7717 system_pods.go:89] "registry-d2wk4" [89a45b74-b58c-468a-8c45-d173530c049f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 01:24:54.528375    7717 system_pods.go:89] "registry-proxy-vhhfv" [985239f4-f991-4990-bf20-39effa769ac7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 01:24:54.528399    7717 system_pods.go:89] "snapshot-controller-989f9ddc8-ff7mh" [e8b3528d-9cc8-4501-97dd-385861a0b54c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 01:24:54.528420    7717 system_pods.go:89] "snapshot-controller-989f9ddc8-wxnqh" [c198cc5e-bdb4-482a-a3a6-af7f9345c6e4] Pending
	I0915 01:24:54.528443    7717 system_pods.go:89] "storage-provisioner" [e0d5c4d0-79f3-4daf-b2a3-dfed5458ee38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 01:24:54.528465    7717 system_pods.go:89] "tiller-deploy-7d9fb5c894-gqw79" [7780487d-b571-401f-a059-bb6ed78f19c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 01:24:54.528494    7717 system_pods.go:126] duration metric: took 17.173554ms to wait for k8s-apps to be running ...
	I0915 01:24:54.528516    7717 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 01:24:54.528575    7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 01:24:54.712839    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:54.823121    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:55.010753    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:55.132002    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:55.324109    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:55.426879    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:55.632608    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:55.823798    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:55.927259    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:56.134020    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:56.323520    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:56.431178    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:56.632217    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:56.823999    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:56.929994    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:57.327624    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:57.328202    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:57.426674    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:57.631364    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:57.827886    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:57.928145    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:58.317054    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:58.411813    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:58.427498    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:58.631354    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:58.718964    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.912956583s)
	I0915 01:24:58.719140    7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.826388801s)
	I0915 01:24:58.719162    7717 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (4.190555614s)
	I0915 01:24:58.719182    7717 system_svc.go:56] duration metric: took 4.190662202s WaitForService to wait for kubelet.
	I0915 01:24:58.719192    7717 kubeadm.go:547] duration metric: took 15.030911841s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 01:24:58.719222    7717 node_conditions.go:102] verifying NodePressure condition ...
	I0915 01:24:58.722617    7717 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0915 01:24:58.722647    7717 node_conditions.go:123] node cpu capacity is 8
	I0915 01:24:58.722664    7717 node_conditions.go:105] duration metric: took 3.434293ms to run NodePressure ...
	I0915 01:24:58.722676    7717 start.go:231] waiting for startup goroutines ...
	I0915 01:24:58.822654    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:58.926707    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:59.132436    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:59.323380    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:59.432238    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:24:59.631373    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:24:59.822148    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:24:59.927068    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:00.131271    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:00.322618    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:00.427247    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:00.632482    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:00.822378    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:00.926371    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:01.131769    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:01.321461    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:01.426401    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:01.631514    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:01.822270    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:01.925910    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:02.131102    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:02.322479    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:02.426814    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:02.631710    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:02.822565    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:02.926580    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:03.132138    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:03.322327    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:03.425763    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:03.630716    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:03.822940    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:03.925954    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:04.131425    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:04.322998    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:04.427359    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:04.633795    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:04.822456    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:04.926493    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:05.130906    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:05.322938    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:05.425885    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:05.630681    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:05.822636    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:05.926232    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:06.131728    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:06.322644    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:06.426800    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:06.631296    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:06.822178    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:06.925942    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:07.130718    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:07.322537    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:07.426377    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:07.631345    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:07.822084    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:07.925806    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:08.130693    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:08.322799    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:08.425772    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:08.630765    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:08.822801    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:08.925477    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:09.131245    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:09.322209    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:09.425889    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:09.631060    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:09.822449    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:09.926105    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:10.135377    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:10.323234    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:10.426561    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:10.631840    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:10.823939    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:10.929475    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:11.131659    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:11.323232    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:11.426397    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:11.632062    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:11.822512    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:11.926521    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:12.131813    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:12.323130    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:12.430949    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:12.631580    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:12.822895    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:12.928543    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:13.133814    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:13.322772    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:13.426817    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:13.631393    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:13.823393    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:13.926426    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:14.132149    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:14.322568    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:14.427491    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:14.631752    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:14.822357    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:14.926196    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:15.131582    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 01:25:15.323112    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:15.427099    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:15.631092    7717 kapi.go:108] duration metric: took 22.098175317s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 01:25:15.822161    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:15.925934    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:16.322695    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:16.425527    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:16.822114    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:16.925749    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:17.322605    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:17.426164    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:17.821917    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:17.925354    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:18.322379    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:18.426193    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:18.822373    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:18.925814    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:19.322679    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:19.426332    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:19.822553    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:19.925553    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:20.322266    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:20.426324    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:20.821524    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:20.927034    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:21.322997    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:21.426650    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:21.823077    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:21.926275    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:22.321918    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:22.426734    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:22.823033    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:22.926909    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:23.323070    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:23.426795    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:23.823047    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:23.926986    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:24.323186    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:24.426780    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:24.824097    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:24.926283    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:25.322970    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:25.426609    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:25.822177    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:25.925828    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:26.322361    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:26.426287    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:26.822283    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:26.930396    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:27.326926    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:27.511664    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:27.822096    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:27.925880    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:28.322673    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:28.427154    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:28.821818    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:28.927831    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:29.321948    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:29.425965    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:29.822472    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:29.926839    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:30.322733    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:30.427056    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:30.824585    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:30.926356    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:31.321456    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:31.437011    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:31.821868    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:31.926229    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:32.322468    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:32.426279    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:32.822201    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:32.925430    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:33.322082    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:33.425328    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:33.821749    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:33.926281    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:34.322605    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:34.426647    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:34.822719    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:34.926368    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:35.322316    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:35.425572    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:35.822289    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:35.925885    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:36.322416    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:36.426361    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:36.823858    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:36.925550    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:37.321940    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:37.426200    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:37.822938    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:37.926483    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:38.322090    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:38.425596    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:38.822029    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:38.926972    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:39.322124    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:39.426802    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:39.821674    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:39.925750    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:40.322708    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:40.426376    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:40.822517    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:40.926527    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:41.322864    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:41.426473    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:41.821895    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:41.927066    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:42.322084    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:42.427037    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:42.822065    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:42.926167    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:43.321526    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:43.425935    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:43.822384    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:43.926491    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:44.323732    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:44.426261    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:44.823018    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:44.926272    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:45.323034    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:45.426847    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:45.822915    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:45.926530    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:46.322111    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:46.426690    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:46.822143    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:46.926141    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:47.322950    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:47.426007    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:47.822633    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:47.926148    7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 01:25:48.322351    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:48.425990    7717 kapi.go:108] duration metric: took 54.092150944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 01:25:48.822736    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:49.322112    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:49.822283    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:50.322558    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:50.822571    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:51.322712    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:51.822809    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:52.321841    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:52.821985    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:53.322924    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:53.822338    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:54.322769    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:54.823000    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:55.322150    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:55.822448    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:57.067289    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:57.323014    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:57.823245    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:58.322944    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:58.823212    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:59.322472    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:25:59.823142    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:00.323320    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:00.822926    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:01.322984    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:01.822237    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:02.323226    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:02.822827    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:03.322048    7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 01:26:03.822051    7717 kapi.go:108] duration metric: took 1m11.591429322s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 01:26:03.823931    7717 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, olm, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0915 01:26:03.823957    7717 addons.go:406] enableAddons completed in 1m20.135647286s
	I0915 01:26:03.872049    7717 start.go:462] kubectl: 1.20.5, cluster: 1.22.1 (minor skew: 2)
	I0915 01:26:03.873630    7717 out.go:177] 
	W0915 01:26:03.873770    7717 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 01:26:03.875759    7717 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 01:26:03.877156    7717 out.go:177] * Done! kubectl is now configured to use "addons-20210915012342-6768" cluster and "default" namespace by default
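The skew flagged just above (kubectl 1.20.5 against a 1.22.1 cluster) is two minor versions, outside kubectl's supported +/-1 window, which is why the warning is printed. A minimal illustration, using the command the log itself suggests (minikube fetches and runs a client that matches the cluster version; any other kubectl arguments can follow the double dash in the same way):

	minikube kubectl -- get pods -A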
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 01:24:10 UTC, end at Wed 2021-09-15 01:33:19 UTC. --
	Sep 15 01:26:21 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:21.778062870Z" level=info msg="ignoring event" container=9193947f6df47d488fc9c1b17f5b0f6ebf6904906a2277b8810f6ff2d57e33a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:21 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:21.830744290Z" level=info msg="ignoring event" container=186feb1e4f0fc3bf20ada56e40f108afa91c6eaafd1a95c50bf7668ba1545f3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:22 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:22.625270992Z" level=info msg="ignoring event" container=8b0de9308e0cf9099d2f1ad5469c3829db02bf01b916f13f27ef86e8b8aa0c35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:22 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:22.625329846Z" level=info msg="ignoring event" container=1f725407e8b365bedc43e8ee1d6e10e367f3ccefe22cdc66a248041549e2a3ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:23 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:23.147699611Z" level=warning msg="reference for unknown type: " digest="sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4"
	Sep 15 01:26:30 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:30.427545879Z" level=info msg="ignoring event" container=809202351b32da83bd09d791d6b15f139813461258763b5a8b83db38504c34da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:31 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:31.623688678Z" level=info msg="ignoring event" container=3a614138064e3f3931786dabb374718f05df53b20bd98cef972022c8a8f1d219 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:33 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:33.527495741Z" level=info msg="ignoring event" container=df96358491977caebb7dfdce107da169a20dba5455ea8fffa78d5c9f5c6cdf65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:33 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:33.726454242Z" level=info msg="ignoring event" container=f10ad361b57219adec35985225d252a36d27b6e7b45092f5f55df95eb209430b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:35 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:35.958138080Z" level=info msg="ignoring event" container=ec8dcf2cd6eae34e29b838506c5c0166e7f558be877390384eeef411949dae75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:37 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:37.012692116Z" level=info msg="ignoring event" container=e77bc9b2c521a6d77f80c9af01bfe088dd94f216c55156f4e3741f1482c0df43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:53 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:53.329510673Z" level=info msg="ignoring event" container=4263b5374e25373ac2f0b30d7278af28718091aca6da880bbcc053f204bffecd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:54 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:54.433662598Z" level=info msg="ignoring event" container=22fc4717dc4b4d2d67033a3a33a819ec8f0aba87cbeff2a909778a78bfbf3983 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.317551853Z" level=info msg="ignoring event" container=e1934f56d6f4b370d7c6f9577455f2f210633fb88cec8e2c4d7500f1dd9e8957 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.343047518Z" level=info msg="ignoring event" container=22ac0eb72b29f3781170ee604ae114856560f0db07edeebb3363e1b446ce4890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.447395487Z" level=info msg="ignoring event" container=d5ff237173b5932ad525ea265704baacf1f6e1bccf9eabffc1e72c45daa5efe1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.485134133Z" level=info msg="ignoring event" container=3834718a46bb7b7bd05dc5718310a481df5b8ee62c2da61212aac4d07eeeadab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.958465119Z" level=info msg="ignoring event" container=c527e733b084c5956b9674f3c16ce3d3b862c2751df48405aac339fb81ce6743 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:56 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:56.967816274Z" level=info msg="ignoring event" container=4b6d1533143217eb946f35ab5a14506836d14f88503596262c525404f4315110 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:57 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:57.325277029Z" level=info msg="ignoring event" container=814778a55cbffd769c5a36a7c39fdd21f959c7e92269d8141baea318659b1e03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:26:57 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:57.434624268Z" level=info msg="ignoring event" container=0e1b827a6c82cceff9c46f6dfc74bfe16e2c4fa11a09de56540c5a31c3237485 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:27:16 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:16.471763950Z" level=info msg="ignoring event" container=f650d8135069b7318befad394c739de2bec28c6df3a9f353a5bed0e46e0d43d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:27:16 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:16.582028100Z" level=info msg="ignoring event" container=c00b87888be2dafb25f38ed66cf4e1ab5b6405ac1b7f6959066712942d6c8ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:27:18 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:18.242446129Z" level=info msg="ignoring event" container=f4bb935650aed2690df7a91b21305ce230f2c1d35154bd6fe9dd349786a30bc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 01:27:18 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:18.339388306Z" level=info msg="ignoring event" container=3ffa766d7dc37a1fb01db6052d9cbd2a58b5b05d346371fc22e9bb502585518d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                                     ATTEMPT             POD ID
	5d1e36825d225       nginx@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3                                                                   6 minutes ago       Running             nginx                                    0                   aadc41f12aa62
	6a2921c82137e       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   6 minutes ago       Running             private-image-eu                         0                   b5128bce8bd9c
	520e510400113       quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef                   6 minutes ago       Running             registration-operator                    0                   2481752cb8655
	7cbae6523d919       quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef                   6 minutes ago       Running             registration-operator                    0                   ab1ff66c00ace
	046d462030e9d       quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef                   6 minutes ago       Running             registration-operator                    0                   601125b20b8d8
	7d2c9db635690       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                6 minutes ago       Running             private-image                            0                   2279760bc97f8
	ec8dcf2cd6eae       quay.io/operator-framework/configmap-operator-registry@sha256:c42f92d2ef7953545c3b03aeebf39a00bbb16f40c0d2177561eb01a7f9eae32b                  6 minutes ago       Exited              extract                                  0                   e77bc9b2c521a
	3a614138064e3       quay.io/operatorhubio/cluster-manager@sha256:a9225e745539308dbb7ff46c785dcacb9b1e5609f84a5557239a7ab8fc1906c1                                   6 minutes ago       Exited              pull                                     0                   e77bc9b2c521a
	9c603683ecfd3       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1                                                                 6 minutes ago       Running             busybox                                  0                   f6f2f6595cb80
	809202351b32d       518fd05ba6b5b                                                                                                                                   6 minutes ago       Exited              util                                     0                   e77bc9b2c521a
	d0c6679691651       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4                                    6 minutes ago       Running             gcp-auth                                 0                   5a6bc363c963c
	186feb1e4f0fc       17e55ec30f203                                                                                                                                   6 minutes ago       Exited              patch                                    0                   8b0de9308e0cf
	9193947f6df47       17e55ec30f203                                                                                                                                   6 minutes ago       Exited              create                                   0                   1f725407e8b36
	5557c478a8991       k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994                                    7 minutes ago       Running             liveness-probe                           0                   ea76608e8f072
	e60935786007c       k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659                                   7 minutes ago       Running             hostpath                                 0                   ea76608e8f072
	8267ec18deb8a       k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108                        7 minutes ago       Running             node-driver-registrar                    0                   ea76608e8f072
	f00612f962f87       quay.io/operatorhubio/catalog@sha256:2c035752603aa817420c9964a8c1cc223e1acf8f9a6f07f05c53d75fa03c9125                                           7 minutes ago       Running             registry-server                          0                   d3b81c04d190c
	6977f2c77abac       k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16           7 minutes ago       Running             csi-external-health-monitor-controller   0                   ea76608e8f072
	936c5a9129b77       k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782                                  7 minutes ago       Running             csi-snapshotter                          0                   b2f1050b60a84
	eea707d225445       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                                          7 minutes ago       Running             packageserver                            0                   dad2d1e917c3a
	9cd368ff987d8       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                                          7 minutes ago       Running             packageserver                            0                   920903b95a9cc
	06110a635db22       k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09                                     7 minutes ago       Running             csi-attacher                             0                   1cd1672c6a6b5
	0eb43f8f2c2b9       k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2                                  7 minutes ago       Running             csi-provisioner                          0                   8f290c522d5a5
	38b151e3a393a       k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a                                      7 minutes ago       Running             csi-resizer                              0                   b4cee9848f970
	25ad1ae1b724f       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                                          7 minutes ago       Running             catalog-operator                         0                   de1c17a793c74
	025ac698fcc7f       k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02                7 minutes ago       Running             csi-external-health-monitor-agent        0                   ea76608e8f072
	b01930895e3c7       quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed                                          7 minutes ago       Running             olm-operator                             0                   78e088f3fc1a5
	4e76cf6e6db03       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                              7 minutes ago       Running             volume-snapshot-controller               0                   4982f7797ee55
	2afc322d8da78       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                              7 minutes ago       Running             volume-snapshot-controller               0                   5beb8c82481a6
	8d912400cc21c       6e38f40d628db                                                                                                                                   8 minutes ago       Running             storage-provisioner                      0                   0c5e12dfe9290
	16b5a168478c7       8d147537fb7d1                                                                                                                                   8 minutes ago       Running             coredns                                  0                   7ab4a132ed1e7
	a51365e4d4520       36c4ebbc9d979                                                                                                                                   8 minutes ago       Running             kube-proxy                               0                   c65dc1df66da1
	f79b2fc97e029       aca5ededae9c8                                                                                                                                   8 minutes ago       Running             kube-scheduler                           0                   f53e4b45d356a
	6465e9569761d       0048118155842                                                                                                                                   8 minutes ago       Running             etcd                                     0                   a9cf829977d4f
	a591528a3fbd5       f30469a2491a5                                                                                                                                   8 minutes ago       Running             kube-apiserver                           0                   12ec9f889db74
	e3b129b7bcd19       6e002eb89a881                                                                                                                                   8 minutes ago       Running             kube-controller-manager                  0                   3a866ddd96b4a
	
	* 
	* ==> coredns [16b5a168478c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210915012342-6768
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210915012342-6768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3
	                    minikube.k8s.io/name=addons-20210915012342-6768
	                    minikube.k8s.io/updated_at=2021_09_15T01_24_30_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210915012342-6768
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210915012342-6768"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 01:24:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210915012342-6768
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 01:33:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 01:32:32 +0000   Wed, 15 Sep 2021 01:24:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 01:32:32 +0000   Wed, 15 Sep 2021 01:24:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 01:32:32 +0000   Wed, 15 Sep 2021 01:24:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 01:32:32 +0000   Wed, 15 Sep 2021 01:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210915012342-6768
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                7a73b86e-8e86-49cc-8445-3e859b641b86
	  Boot ID:                    688de29f-953b-46f5-823d-9be4668e8e77
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  default                     nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  default                     private-image-7ff9c8c74f-zr6nw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  default                     private-image-eu-5956d58f9f-s4zkt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     task-pv-pod-restore                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gcp-auth                    gcp-auth-f6f59cc7c-qvf6p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 coredns-78fcd69978-pmmqc                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m37s
	  kube-system                 csi-hostpath-attacher-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 csi-hostpath-provisioner-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 csi-hostpath-resizer-0                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 csi-hostpath-snapshotter-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 csi-hostpathplugin-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 etcd-addons-20210915012342-6768                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m49s
	  kube-system                 kube-apiserver-addons-20210915012342-6768             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-controller-manager-addons-20210915012342-6768    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-proxy-xf8sd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-addons-20210915012342-6768             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 snapshot-controller-989f9ddc8-ff7mh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 snapshot-controller-989f9ddc8-wxnqh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  my-etcd                     cluster-manager-794c6cc889-4lwmf                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         6m41s
	  my-etcd                     cluster-manager-794c6cc889-ldv66                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         6m41s
	  my-etcd                     cluster-manager-794c6cc889-x97lm                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         6m41s
	  olm                         catalog-operator-6d578c5764-4q694                     10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         8m26s
	  olm                         olm-operator-5b58594fc8-d98tv                         10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         8m26s
	  olm                         operatorhubio-catalog-jzdmc                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m49s
	  olm                         packageserver-5dc55c7c59-pb47t                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m53s
	  olm                         packageserver-5dc55c7c59-qsz8l                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1100m (13%)  0 (0%)
	  memory             944Mi (2%)   170Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  Starting                 8m59s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s (x4 over 8m58s)  kubelet  Node addons-20210915012342-6768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x4 over 8m58s)  kubelet  Node addons-20210915012342-6768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x3 over 8m58s)  kubelet  Node addons-20210915012342-6768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8m49s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m49s                  kubelet  Node addons-20210915012342-6768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s                  kubelet  Node addons-20210915012342-6768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s                  kubelet  Node addons-20210915012342-6768 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8m49s                  kubelet  Node addons-20210915012342-6768 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m49s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m39s                  kubelet  Node addons-20210915012342-6768 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep15 01:17]  #2
	[  +0.004034]  #3
	[  +0.004037]  #4
	[  +0.003706] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.002514] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002963]  #5
	[  +0.003903]  #6
	[  +0.004267]  #7
	[  +0.079578] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.766714] i8042: Warning: Keylock active
	[  +0.338122] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
	[  +0.008632] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
	[  +0.018013] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
	[  +0.017992] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
	[  +2.917583] aufs: loading out-of-tree module taints kernel.
	[Sep15 01:24] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [6465e9569761] <==
	* {"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"404.826218ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T01:26:13.824Z","caller":"traceutil/trace.go:171","msg":"trace[1770283280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1299; }","duration":"404.854493ms","start":"2021-09-15T01:26:13.419Z","end":"2021-09-15T01:26:13.824Z","steps":["trace[1770283280] 'agreement among raft nodes before linearized reading'  (duration: 404.814161ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"395.912704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2021-09-15T01:26:13.824Z","caller":"traceutil/trace.go:171","msg":"trace[66348658] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1299; }","duration":"395.950025ms","start":"2021-09-15T01:26:13.428Z","end":"2021-09-15T01:26:13.824Z","steps":["trace[66348658] 'agreement among raft nodes before linearized reading'  (duration: 395.887826ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:13.428Z","time spent":"395.987814ms","remote":"127.0.0.1:41036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1152,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[1499032662] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1509; }","duration":"326.667885ms","start":"2021-09-15T01:26:29.014Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[1499032662] 'read index received'  (duration: 326.660582ms)","trace[1499032662] 'applied index is now lower than readState.Index'  (duration: 6.259µs)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[369463855] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"181.710994ms","start":"2021-09-15T01:26:29.159Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[369463855] 'process raft request'  (duration: 181.394959ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"327.01863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2516"}
	{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[735560544] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1434; }","duration":"327.091382ms","start":"2021-09-15T01:26:29.014Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[735560544] 'agreement among raft nodes before linearized reading'  (duration: 326.781489ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:29.014Z","time spent":"327.13841ms","remote":"127.0.0.1:41040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":2540,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"318.34578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[1460863496] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1435; }","duration":"318.385128ms","start":"2021-09-15T01:26:29.023Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[1460863496] 'agreement among raft nodes before linearized reading'  (duration: 318.266948ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:29.023Z","time spent":"318.422406ms","remote":"127.0.0.1:41106","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":91,"response size":31,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true "}
	{"level":"info","ts":"2021-09-15T01:26:34.569Z","caller":"traceutil/trace.go:171","msg":"trace[1200632049] linearizableReadLoop","detail":"{readStateIndex:1593; appliedIndex:1593; }","duration":"355.117105ms","start":"2021-09-15T01:26:34.214Z","end":"2021-09-15T01:26:34.569Z","steps":["trace[1200632049] 'read index received'  (duration: 355.109079ms)","trace[1200632049] 'applied index is now lower than readState.Index'  (duration: 6.615µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"484.399027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" ","response":"range_response_count:1 size:919"}
	{"level":"info","ts":"2021-09-15T01:26:34.699Z","caller":"traceutil/trace.go:171","msg":"trace[1257760560] range","detail":"{range_begin:/registry/operators.coreos.com/operatorgroups/my-etcd/; range_end:/registry/operators.coreos.com/operatorgroups/my-etcd0; response_count:1; response_revision:1514; }","duration":"484.484275ms","start":"2021-09-15T01:26:34.214Z","end":"2021-09-15T01:26:34.699Z","steps":["trace[1257760560] 'agreement among raft nodes before linearized reading'  (duration: 355.215187ms)","trace[1257760560] 'range keys from in-memory index tree'  (duration: 129.148361ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:34.214Z","time spent":"484.546308ms","remote":"127.0.0.1:41502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":112,"response count":1,"response size":943,"request content":"key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" "}
	{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.31156ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007659800411011 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" mod_revision:1510 > success:<request_put:<key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" value_size:2560 >> failure:<request_range:<key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T01:26:34.699Z","caller":"traceutil/trace.go:171","msg":"trace[612726848] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"286.37951ms","start":"2021-09-15T01:26:34.413Z","end":"2021-09-15T01:26:34.699Z","steps":["trace[612726848] 'process raft request'  (duration: 156.918875ms)","trace[612726848] 'compare'  (duration: 129.203371ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[1181682201] linearizableReadLoop","detail":"{readStateIndex:1594; appliedIndex:1594; }","duration":"132.041722ms","start":"2021-09-15T01:26:34.570Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[1181682201] 'read index received'  (duration: 132.036075ms)","trace[1181682201] 'applied index is now lower than readState.Index'  (duration: 4.476µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"143.435067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/clusterserviceversions/my-etcd/cluster-manager.v0.4.0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[2079300295] range","detail":"{range_begin:/registry/operators.coreos.com/clusterserviceversions/my-etcd/cluster-manager.v0.4.0; range_end:; response_count:0; response_revision:1515; }","duration":"143.480457ms","start":"2021-09-15T01:26:34.558Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[2079300295] 'agreement among raft nodes before linearized reading'  (duration: 143.423086ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"391.856044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[1835690977] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1515; }","duration":"391.891432ms","start":"2021-09-15T01:26:34.310Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[1835690977] 'agreement among raft nodes before linearized reading'  (duration: 391.831591ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:34.310Z","time spent":"391.9473ms","remote":"127.0.0.1:41036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1152,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	* 
	* ==> kernel <==
	*  01:33:19 up 16 min,  0 users,  load average: 0.32, 2.32, 1.95
	Linux addons-20210915012342-6768 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a591528a3fbd] <==
	* W0915 01:24:56.820200       1 handler_proxy.go:104] no RequestInfo found in the context
	E0915 01:24:56.820255       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0915 01:24:56.820265       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0915 01:24:57.329448       1 controller.go:141] slow openapi aggregation of "operatorgroups.operators.coreos.com": 1.015152211s
	I0915 01:24:57.730283       1 controller.go:611] quota admission added evaluator for: operatorgroups.operators.coreos.com
	I0915 01:24:58.512188       1 controller.go:611] quota admission added evaluator for: clusterserviceversions.operators.coreos.com
	I0915 01:24:58.708908       1 controller.go:611] quota admission added evaluator for: catalogsources.operators.coreos.com
	E0915 01:25:09.737880       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.244.62:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.244.62:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0915 01:25:26.811742       1 controller.go:611] quota admission added evaluator for: operatorconditions.operators.coreos.com
	W0915 01:25:29.511486       1 handler_proxy.go:104] no RequestInfo found in the context
	E0915 01:25:29.511539       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0915 01:25:29.511551       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	E0915 01:25:44.015954       1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.111.224.203:5443/apis/packages.operators.coreos.com/v1: Get "https://10.111.224.203:5443/apis/packages.operators.coreos.com/v1": context deadline exceeded
	I0915 01:25:57.065857       1 trace.go:205] Trace[1724058382]: "List etcd3" key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: (15-Sep-2021 01:25:56.320) (total time: 745ms):
	Trace[1724058382]: [745.326531ms] [745.326531ms] END
	I0915 01:25:57.066512       1 trace.go:205] Trace[950488762]: "List" url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:8da4f27d-db05-44bb-ac47-fd01fd39d0ac,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (15-Sep-2021 01:25:56.320) (total time: 746ms):
	Trace[950488762]: ---"Listing from storage done" 745ms (01:25:57.065)
	Trace[950488762]: [746.016629ms] [746.016629ms] END
	I0915 01:26:28.361555       1 controller.go:611] quota admission added evaluator for: subscriptions.operators.coreos.com
	I0915 01:26:28.959258       1 controller.go:611] quota admission added evaluator for: installplans.operators.coreos.com
	I0915 01:26:55.546095       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 01:27:13.012438       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0915 01:27:15.163184       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [e3b129b7bcd1] <==
	* I0915 01:26:37.714735       1 event.go:291] "Event occurred" object="default/private-image" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-7ff9c8c74f to 1"
	I0915 01:26:37.717943       1 event.go:291] "Event occurred" object="default/private-image-7ff9c8c74f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-7ff9c8c74f-zr6nw"
	I0915 01:26:38.013235       1 event.go:291] "Event occurred" object="my-etcd/cluster-manager" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cluster-manager-794c6cc889 to 3"
	I0915 01:26:38.035505       1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-ldv66"
	I0915 01:26:38.122535       1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-4lwmf"
	I0915 01:26:38.122571       1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-x97lm"
	I0915 01:26:43.310431       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 01:26:43.411414       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 01:26:53.136572       1 event.go:291] "Event occurred" object="default/private-image-eu" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-eu-5956d58f9f to 1"
	I0915 01:26:53.216808       1 event.go:291] "Event occurred" object="default/private-image-eu-5956d58f9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-eu-5956d58f9f-s4zkt"
	I0915 01:26:57.445749       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0915 01:26:59.021896       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768" 
	I0915 01:26:59.671466       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768" 
	I0915 01:26:59.671610       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e\" "
	I0915 01:27:06.923132       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0915 01:27:06.926944       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	E0915 01:27:11.719005       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-mr4hf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	I0915 01:27:17.740313       1 event.go:291] "Event occurred" object="default/hpvc-restore" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0915 01:27:18.002687       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1092d423-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768" 
	I0915 01:27:18.565122       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1092d423-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768" 
	I0915 01:27:18.565243       1 event.go:291] "Event occurred" object="default/task-pv-pod-restore" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03\" "
	I0915 01:27:21.943567       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768" 
	I0915 01:27:21.945423       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768" 
	I0915 01:27:22.487535       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768" 
	I0915 01:27:37.969584       1 namespace_controller.go:185] Namespace has been deleted ingress-nginx
	
	* 
	* ==> kube-proxy [a51365e4d452] <==
	* I0915 01:24:43.262791       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 01:24:43.262834       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 01:24:43.262853       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 01:24:43.281819       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 01:24:43.281911       1 server_others.go:212] Using iptables Proxier.
	I0915 01:24:43.281924       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 01:24:43.281937       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 01:24:43.282328       1 server.go:649] Version: v1.22.1
	I0915 01:24:43.282908       1 config.go:315] Starting service config controller
	I0915 01:24:43.282931       1 config.go:224] Starting endpoint slice config controller
	I0915 01:24:43.282935       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 01:24:43.282942       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 01:24:43.285195       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"addons-20210915012342-6768.16a4da62e4104718", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04870b6d0dc7294, ext:67064675, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-addons-20210915012342-6768", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"addons-2
0210915012342-6768", UID:"addons-20210915012342-6768", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "addons-20210915012342-6768.16a4da62e4104718" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 01:24:43.383971       1 shared_informer.go:247] Caches are synced for service config 
	I0915 01:24:43.383990       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [f79b2fc97e02] <==
	* E0915 01:24:27.025243       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 01:24:27.027835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 01:24:27.027859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:27.027946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 01:24:27.028062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 01:24:27.028066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 01:24:27.028123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 01:24:27.028198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 01:24:27.028225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 01:24:27.028300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:27.028319       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 01:24:27.028399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 01:24:27.028398       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:27.028478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 01:24:27.028482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:27.918709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:28.075731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 01:24:28.129112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 01:24:28.158174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 01:24:28.189443       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 01:24:30.630161       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 01:24:30.631839       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 01:24:30.631876       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 01:24:30.698551       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0915 01:24:30.824497       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 01:24:10 UTC, end at Wed 2021-09-15 01:33:19 UTC. --
	Sep 15 01:29:25 addons-20210915012342-6768 kubelet[2283]: E0915 01:29:25.877609    2283 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds podName:016da038-7e39-46c3-9e82-2ac44a0118dd nodeName:}" failed. No retries permitted until 2021-09-15 01:31:27.877589675 +0000 UTC m=+417.978605527 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds") pod "task-pv-pod-restore" (UID: "016da038-7e39-46c3-9e82-2ac44a0118dd") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
	Sep 15 01:29:35 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:35.325529    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:29:38 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:38.325793    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:29:41 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:41.325304    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:22 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:22.327640    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:25 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:25.325134    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:41 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:41.325725    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:43 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:43.324939    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:43 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:43.325000    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-4lwmf" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:30:47 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:47.325164    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:31:00 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:00.325951    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:31:27 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:27.923051    2283 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds podName:016da038-7e39-46c3-9e82-2ac44a0118dd nodeName:}" failed. No retries permitted until 2021-09-15 01:33:29.92302634 +0000 UTC m=+540.024042198 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds") pod "task-pv-pod-restore" (UID: "016da038-7e39-46c3-9e82-2ac44a0118dd") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
	Sep 15 01:31:36 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:36.326171    2283 kubelet.go:1720] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore"
	Sep 15 01:31:36 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:36.326226    2283 pod_workers.go:747] "Error syncing pod, skipping" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore" podUID=016da038-7e39-46c3-9e82-2ac44a0118dd
	Sep 15 01:31:45 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:45.325418    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:31:48 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:48.326052    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:31:49 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:49.325086    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:06 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:06.325170    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:06 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:06.325263    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:12 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:12.325039    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-4lwmf" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:17 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:17.325897    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:53 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:53.325941    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:32:58 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:58.325654    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:33:14 addons-20210915012342-6768 kubelet[2283]: I0915 01:33:14.325869    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 01:33:17 addons-20210915012342-6768 kubelet[2283]: I0915 01:33:17.324974    2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [8d912400cc21] <==
	* I0915 01:24:56.011202       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 01:24:56.111327       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 01:24:56.113121       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 01:24:56.219201       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 01:24:56.219579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0!
	I0915 01:24:56.229644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f072d67-d6f1-4e15-bf6a-802d085768f6", APIVersion:"v1", ResourceVersion:"859", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0 became leader
	I0915 01:24:56.521463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210915012342-6768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt: exit status 1 (67.309579ms)

                                                
                                                
-- stdout --
	Name:         task-pv-pod-restore
	Namespace:    default
	Priority:     0
	Node:         addons-20210915012342-6768/192.168.49.2
	Start Time:   Wed, 15 Sep 2021 01:27:17 +0000
	Labels:       app=task-pv-pod-restore
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      k8s-minikube
	      GCP_PROJECT:                     k8s-minikube
	      GCLOUD_PROJECT:                  k8s-minikube
	      GOOGLE_CLOUD_PROJECT:            k8s-minikube
	      CLOUDSDK_CORE_PROJECT:           k8s-minikube
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rw7gr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-rw7gr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                  Age                   From                     Message
	  ----     ------                  ----                  ----                     -------
	  Normal   Scheduled               6m3s                  default-scheduler        Successfully assigned default/task-pv-pod-restore to addons-20210915012342-6768
	  Normal   SuccessfulAttachVolume  6m2s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03"
	  Warning  FailedMount             113s (x10 over 6m2s)  kubelet                  MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
	  Warning  FailedMount             104s (x2 over 4m)     kubelet                  Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create--1-ndlhf" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch--1-krrln" not found
	Error from server (NotFound): pods "4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt: exit status 1
--- FAIL: TestAddons/parallel/CSI (383.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/DNS (336.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (2.911137384s)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (69.602842ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (77.027521ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (101.559823ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (4.378767538s)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (93.134958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (73.427432ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (68.107933ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (99.051739ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.537201ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (65.25867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
E0915 02:06:15.717900    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:06:27.759009    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 02:06:28.428860    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/no-preload-20210915015352-6768/client.crt: no such file or directory
E0915 02:06:51.185397    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.190656    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.200858    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.221078    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.261298    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.341607    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.502032    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:51.822737    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:52.463622    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:52.935380    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:52.940589    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:52.950818    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:52.971060    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:53.011327    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:53.091611    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:53.251993    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:53.572573    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:53.743832    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:54.213675    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:55.494251    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:06:56.304603    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:06:56.678058    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:06:58.055172    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:07:01.425059    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:07:03.176283    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:07:11.665533    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:07:13.416456    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:07:32.146548    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
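Note: the cert_rotation lines here are noise from client-go's certificate watcher, which keeps polling client.crt files for profiles (auto, old-k8s-version, default-k8s-different-port, and others) that parallel tests appear to have already deleted; they are not part of the cilium DNS failure itself. A minimal triage sketch for confirming that a profile's client certificate is really gone, assuming the .minikube/profiles/<name>/client.crt layout shown in these paths (the base directory and the profile name below are examples copied from the log, not fixed values):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Adjust this to the minikube storage directory for your run; the layout
	// below (.minikube/profiles/<profile>/client.crt) matches the log paths.
	minikubeDir := filepath.Join(os.Getenv("HOME"), ".minikube")
	// Example profile name copied from the cert_rotation lines above.
	cert := filepath.Join(minikubeDir, "profiles", "auto-20210915015303-6768", "client.crt")
	if _, err := os.Stat(cert); os.IsNotExist(err) {
		fmt.Println("client.crt is gone: the profile was likely deleted, so the cert_rotation errors are expected noise")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("client.crt still exists at", cert)
	}
}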
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (70.852822ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
E0915 02:07:33.897582    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:07:50.349495    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/no-preload-20210915015352-6768/client.crt: no such file or directory
E0915 02:08:13.107053    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/auto-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:14.858295    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/default-k8s-different-port-20210915015609-6768/client.crt: no such file or directory
E0915 02:08:18.598726    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:08:19.679791    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.685045    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.695262    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.715520    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.755746    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.835998    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:19.996369    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:20.316874    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:20.957986    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:22.239049    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:24.799430    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.828805    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.834058    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.844502    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.864722    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.904954    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:25.985204    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:26.145565    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:26.466073    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:27.106911    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:28.387301    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:29.920072    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:30.947449    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:36.068235    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (63.417538ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
E0915 02:08:40.161207    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:08:46.308752    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:00.641394    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/calico-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:06.789766    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/false-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.059545    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.064824    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.075059    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.095296    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.135515    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.215818    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.376169    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:14.696691    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:15.337852    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:16.618469    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
E0915 02:09:19.179248    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/custom-weave-20210915015303-6768/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context cilium-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (64.28322ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get deployments.extensions netcat)

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/cilium/DNS (336.43s)
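Every retry above fails identically: the API server reports deployments.extensions "netcat" as NotFound, so the netcat deployment was never reachable in the cilium cluster and the DNS query itself never ran, which is why the final assertion sees got="" instead of the expected 10.96.0.1. A rough reproduction sketch of the probe (an approximation of what net_test.go:163 runs, with the kubeconfig context name copied from the log; this is not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "cilium-20210915015303-6768" // context name taken from the log above
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		// In this run the command never reached DNS: the netcat deployment
		// itself was reported NotFound, so nslookup was never executed.
		fmt.Printf("exec failed: %v\n%s", err, out)
		return
	}
	if strings.Contains(string(out), "10.96.0.1") {
		fmt.Println("in-cluster DNS resolved kubernetes.default to the expected service IP")
	} else {
		fmt.Printf("unexpected nslookup output:\n%s", out)
	}
}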

                                                
                                    

Test pass (260/282)

Order passed test Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 12.6
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.22.1/json-events 4.87
11 TestDownloadOnly/v1.22.1/preload-exists 0
15 TestDownloadOnly/v1.22.1/LogsDuration 0.06
17 TestDownloadOnly/v1.22.2-rc.0/json-events 4.75
18 TestDownloadOnly/v1.22.2-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.2-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.34
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 3.93
26 TestOffline 65.4
28 TestAddons/Setup 164.76
30 TestAddons/parallel/Registry 27.49
31 TestAddons/parallel/Ingress 39.55
32 TestAddons/parallel/MetricsServer 5.87
33 TestAddons/parallel/HelmTiller 23.63
34 TestAddons/parallel/Olm 41.52
36 TestAddons/parallel/GCPAuth 35.89
37 TestAddons/StoppedEnableDisable 12.35
38 TestCertOptions 43.69
39 TestDockerFlags 28.88
40 TestForceSystemdFlag 38.34
41 TestForceSystemdEnv 50.08
42 TestKVMDriverInstallOrUpdate 2.99
46 TestErrorSpam/setup 21.08
47 TestErrorSpam/start 0.88
48 TestErrorSpam/status 1.08
49 TestErrorSpam/pause 1.26
50 TestErrorSpam/unpause 1.45
51 TestErrorSpam/stop 14.91
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 39.97
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 5.27
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.17
62 TestFunctional/serial/CacheCmd/cache/add_remote 7.44
63 TestFunctional/serial/CacheCmd/cache/add_local 4.67
64 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
65 TestFunctional/serial/CacheCmd/cache/list 0.05
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
68 TestFunctional/serial/CacheCmd/cache/delete 0.1
69 TestFunctional/serial/MinikubeKubectlCmd 0.09
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
71 TestFunctional/serial/ExtraConfig 29.13
72 TestFunctional/serial/ComponentHealth 0.06
73 TestFunctional/serial/LogsCmd 1.16
74 TestFunctional/serial/LogsFileCmd 1.16
76 TestFunctional/parallel/ConfigCmd 0.35
77 TestFunctional/parallel/DashboardCmd 3.44
78 TestFunctional/parallel/DryRun 0.57
79 TestFunctional/parallel/InternationalLanguage 0.24
80 TestFunctional/parallel/StatusCmd 1.37
83 TestFunctional/parallel/ServiceCmd 19.88
84 TestFunctional/parallel/AddonsCmd 0.16
85 TestFunctional/parallel/PersistentVolumeClaim 32.62
87 TestFunctional/parallel/SSHCmd 0.79
88 TestFunctional/parallel/CpCmd 0.68
89 TestFunctional/parallel/MySQL 21.1
90 TestFunctional/parallel/FileSync 0.35
91 TestFunctional/parallel/CertSync 2.23
95 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/LoadImage 1.54
97 TestFunctional/parallel/SaveImage 1.98
98 TestFunctional/parallel/RemoveImage 1.91
99 TestFunctional/parallel/LoadImageFromFile 5.58
100 TestFunctional/parallel/SaveImageToFile 2.17
101 TestFunctional/parallel/BuildImage 2.64
102 TestFunctional/parallel/ListImages 0.28
103 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
105 TestFunctional/parallel/Version/short 0.06
106 TestFunctional/parallel/Version/components 0.58
107 TestFunctional/parallel/DockerEnv/bash 1.36
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.3
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
122 TestFunctional/parallel/ProfileCmd/profile_list 0.41
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
124 TestFunctional/parallel/MountCmd/any-port 6.15
125 TestFunctional/parallel/MountCmd/specific-port 2.2
126 TestFunctional/delete_busybox_image 0.09
127 TestFunctional/delete_my-image_image 0.04
128 TestFunctional/delete_minikube_cached_images 0.04
132 TestJSONOutput/start/Command 42.07
133 TestJSONOutput/start/Audit 0
135 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
138 TestJSONOutput/pause/Command 0.54
139 TestJSONOutput/pause/Audit 0
141 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/unpause/Command 0.55
145 TestJSONOutput/unpause/Audit 0
147 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/stop/Command 11.07
151 TestJSONOutput/stop/Audit 0
153 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
155 TestErrorJSONOutput 0.32
157 TestKicCustomNetwork/create_custom_network 23.19
158 TestKicCustomNetwork/use_default_bridge_network 23.54
159 TestKicExistingNetwork 25.14
160 TestMainNoArgs 0.05
163 TestMultiNode/serial/FreshStart2Nodes 74.71
164 TestMultiNode/serial/DeployApp2Nodes 4.85
165 TestMultiNode/serial/PingHostFrom2Pods 0.82
166 TestMultiNode/serial/AddNode 44.34
167 TestMultiNode/serial/ProfileList 0.36
168 TestMultiNode/serial/CopyFile 2.78
169 TestMultiNode/serial/StopNode 2.6
170 TestMultiNode/serial/StartAfterStop 24.69
171 TestMultiNode/serial/RestartKeepsNodes 120.58
172 TestMultiNode/serial/DeleteNode 5.49
173 TestMultiNode/serial/StopMultiNode 21.83
174 TestMultiNode/serial/RestartMultiNode 59.42
175 TestMultiNode/serial/ValidateNameConflict 24.35
181 TestDebPackageInstall/install_amd64_debian_sid/minikube 0
182 TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver 12.04
184 TestDebPackageInstall/install_amd64_debian_latest/minikube 0
185 TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver 9.88
187 TestDebPackageInstall/install_amd64_debian_10/minikube 0
188 TestDebPackageInstall/install_amd64_debian_10/kvm2-driver 9.42
190 TestDebPackageInstall/install_amd64_debian_9/minikube 0
191 TestDebPackageInstall/install_amd64_debian_9/kvm2-driver 8.13
193 TestDebPackageInstall/install_amd64_ubuntu_latest/minikube 0
194 TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver 14.17
196 TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube 0
197 TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver 13.26
199 TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube 0
200 TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver 13.74
202 TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube 0
203 TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver 13.19
204 TestPreload 116.02
206 TestScheduledStopUnix 61.47
207 TestSkaffold 66.83
209 TestInsufficientStorage 9.26
210 TestRunningBinaryUpgrade 79.8
212 TestKubernetesUpgrade 103.7
213 TestMissingContainerUpgrade 108.25
215 TestPause/serial/Start 63.84
216 TestStoppedBinaryUpgrade/Upgrade 119.34
217 TestPause/serial/SecondStartNoReconfiguration 8.94
225 TestPause/serial/Pause 0.7
226 TestPause/serial/VerifyStatus 0.41
227 TestPause/serial/Unpause 0.68
228 TestPause/serial/PauseAgain 0.94
229 TestPause/serial/DeletePaused 2.8
230 TestPause/serial/VerifyDeletedResources 16.34
231 TestStoppedBinaryUpgrade/MinikubeLogs 1.53
244 TestStartStop/group/old-k8s-version/serial/FirstStart 110.26
246 TestStartStop/group/no-preload/serial/FirstStart 71.8
248 TestStartStop/group/embed-certs/serial/FirstStart 54.32
249 TestStartStop/group/embed-certs/serial/DeployApp 9.52
251 TestStartStop/group/newest-cni/serial/FirstStart 36.87
252 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.69
253 TestStartStop/group/embed-certs/serial/Stop 11.25
254 TestStartStop/group/no-preload/serial/DeployApp 8.51
255 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
256 TestStartStop/group/embed-certs/serial/SecondStart 341.01
257 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.69
258 TestStartStop/group/no-preload/serial/Stop 12.01
259 TestStartStop/group/newest-cni/serial/DeployApp 0
260 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 5.03
261 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 3.49
262 TestStartStop/group/no-preload/serial/SecondStart 368.29
263 TestStartStop/group/newest-cni/serial/Stop 11.14
264 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
265 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
266 TestStartStop/group/newest-cni/serial/SecondStart 21.39
267 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
268 TestStartStop/group/old-k8s-version/serial/Stop 11.08
269 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
270 TestStartStop/group/old-k8s-version/serial/SecondStart 352.37
271 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
272 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
273 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
274 TestStartStop/group/newest-cni/serial/Pause 2.83
276 TestStartStop/group/default-k8s-different-port/serial/FirstStart 43.65
277 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.42
278 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.74
279 TestStartStop/group/default-k8s-different-port/serial/Stop 11.2
280 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
281 TestStartStop/group/default-k8s-different-port/serial/SecondStart 346.58
282 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
283 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
284 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
285 TestStartStop/group/embed-certs/serial/Pause 2.97
286 TestNetworkPlugins/group/auto/Start 43.8
287 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.3
288 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
290 TestNetworkPlugins/group/auto/KubeletFlags 0.36
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
292 TestNetworkPlugins/group/auto/NetCatPod 11.2
293 TestStartStop/group/no-preload/serial/Pause 3.24
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.21
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
296 TestStartStop/group/old-k8s-version/serial/Pause 3.18
297 TestNetworkPlugins/group/false/Start 87.09
298 TestNetworkPlugins/group/auto/DNS 0.18
299 TestNetworkPlugins/group/auto/Localhost 0.19
300 TestNetworkPlugins/group/auto/HairPin 5.19
301 TestNetworkPlugins/group/cilium/Start 86.71
302 TestNetworkPlugins/group/calico/Start 68.82
303 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
304 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.11
305 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.43
306 TestStartStop/group/default-k8s-different-port/serial/Pause 3.48
307 TestNetworkPlugins/group/custom-weave/Start 54.99
308 TestNetworkPlugins/group/calico/ControllerPod 5.02
309 TestNetworkPlugins/group/calico/KubeletFlags 0.39
310 TestNetworkPlugins/group/calico/NetCatPod 10.44
311 TestNetworkPlugins/group/false/KubeletFlags 0.41
312 TestNetworkPlugins/group/false/NetCatPod 8.28
313 TestNetworkPlugins/group/cilium/ControllerPod 5.9
314 TestNetworkPlugins/group/false/DNS 0.17
315 TestNetworkPlugins/group/false/Localhost 0.16
316 TestNetworkPlugins/group/false/HairPin 5.16
317 TestNetworkPlugins/group/calico/DNS 0.19
318 TestNetworkPlugins/group/calico/Localhost 1.83
319 TestNetworkPlugins/group/cilium/KubeletFlags 0.44
320 TestNetworkPlugins/group/calico/HairPin 0.19
321 TestNetworkPlugins/group/cilium/NetCatPod 9.34
322 TestNetworkPlugins/group/enable-default-cni/Start 49.55
323 TestNetworkPlugins/group/kindnet/Start 63.13
325 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.37
326 TestNetworkPlugins/group/custom-weave/NetCatPod 9.35
327 TestNetworkPlugins/group/bridge/Start 43.48
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
333 TestNetworkPlugins/group/kubenet/Start 46.94
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
336 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
337 TestNetworkPlugins/group/kindnet/DNS 0.2
338 TestNetworkPlugins/group/kindnet/Localhost 0.21
339 TestNetworkPlugins/group/kindnet/HairPin 0.18
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
341 TestNetworkPlugins/group/bridge/NetCatPod 9.24
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.17
344 TestNetworkPlugins/group/bridge/HairPin 0.17
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
346 TestNetworkPlugins/group/kubenet/NetCatPod 10.28
347 TestNetworkPlugins/group/kubenet/DNS 0.16
348 TestNetworkPlugins/group/kubenet/Localhost 0.15
349 TestNetworkPlugins/group/kubenet/HairPin 0.15
TestDownloadOnly/v1.14.0/json-events (12.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.597795541s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (12.60s)

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210915012315-6768
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210915012315-6768: exit status 85 (59.668172ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:23:15
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.17 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 01:23:15.927695    6780 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:23:15.927870    6780 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:23:15.927881    6780 out.go:311] Setting ErrFile to fd 2...
	I0915 01:23:15.927887    6780 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:23:15.927983    6780 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	W0915 01:23:15.928121    6780 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/config/config.json: no such file or directory
	I0915 01:23:15.928379    6780 out.go:305] Setting JSON to true
	I0915 01:23:15.963003    6780 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":359,"bootTime":1631668637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0915 01:23:15.963069    6780 start.go:121] virtualization: kvm guest
	I0915 01:23:15.965778    6780 notify.go:169] Checking for updates...
	I0915 01:23:15.967516    6780 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 01:23:16.008013    6780 docker.go:132] docker version: linux-19.03.15
	I0915 01:23:16.008098    6780 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:23:16.342972    6780 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:16.039580314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:23:16.343051    6780 docker.go:237] overlay module found
	I0915 01:23:16.345042    6780 start.go:278] selected driver: docker
	I0915 01:23:16.345056    6780 start.go:751] validating driver "docker" against <nil>
	I0915 01:23:16.345495    6780 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:23:16.420563    6780 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:16.376953226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:23:16.420664    6780 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 01:23:16.421150    6780 start_flags.go:345] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0915 01:23:16.421247    6780 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 01:23:16.421270    6780 cni.go:93] Creating CNI manager for ""
	I0915 01:23:16.421277    6780 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:23:16.421290    6780 start_flags.go:278] config:
	{Name:download-only-20210915012315-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210915012315-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:23:16.423216    6780 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 01:23:16.424674    6780 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:23:16.424776    6780 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 01:23:16.466523    6780 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 01:23:16.466557    6780 cache.go:57] Caching tarball of preloaded images
	I0915 01:23:16.466792    6780 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:23:16.468872    6780 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:23:16.504933    6780 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:23:16.505102    6780 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 01:23:16.505164    6780 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:23:16.529734    6780 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:f9e1bc5997daac3e4aca6f6bb5ce5b14 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 01:23:19.414570    6780 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:23:19.414655    6780 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:23:20.422807    6780 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0915 01:23:20.423063    6780 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/download-only-20210915012315-6768/config.json ...
	I0915 01:23:20.423097    6780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/download-only-20210915012315-6768/config.json: {Name:mk0118ba527abe54b6232ca15292678c61bbc1b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:23:20.423261    6780 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:23:20.423454    6780 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/linux/v1.14.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012315-6768"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
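Exit status 85 from "minikube logs" is the expected outcome here: the profile was created with --download-only, so no control-plane node exists and there are no logs to collect, and the test records the non-zero exit and still passes. A sketch of the general pattern for asserting a specific exit code from a CLI run (illustrative only, not the actual aaa_download_only_test.go code; binary path and profile name are copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-20210915012315-6768")
	err := cmd.Run()
	// A non-zero exit surfaces as *exec.ExitError, which carries the code.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85: no control-plane node exists yet, so there are no logs")
		return
	}
	fmt.Println("unexpected result:", err)
}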

                                                
                                    
TestDownloadOnly/v1.22.1/json-events (4.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.866072563s)
--- PASS: TestDownloadOnly/v1.22.1/json-events (4.87s)

                                                
                                    
TestDownloadOnly/v1.22.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/preload-exists
--- PASS: TestDownloadOnly/v1.22.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210915012315-6768
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210915012315-6768: exit status 85 (62.691741ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:23:28
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.17 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012315-6768"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.22.2-rc.0/json-events (4.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.2-rc.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210915012315-6768 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.752010987s)
--- PASS: TestDownloadOnly/v1.22.2-rc.0/json-events (4.75s)

                                                
                                    
TestDownloadOnly/v1.22.2-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.2-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.2-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.2-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210915012315-6768
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210915012315-6768: exit status 85 (59.088624ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:23:33
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.17 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012315-6768"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210915012315-6768
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnlyKic (3.93s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:227: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210915012339-6768 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:227: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210915012339-6768 --force --alsologtostderr --driver=docker  --container-runtime=docker: (2.597595501s)
helpers_test.go:176: Cleaning up "download-docker-20210915012339-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210915012339-6768
--- PASS: TestDownloadOnlyKic (3.93s)

                                                
                                    
TestOffline (65.4s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20210915015059-6768 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20210915015059-6768 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m2.607508865s)
helpers_test.go:176: Cleaning up "offline-docker-20210915015059-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20210915015059-6768

                                                
                                                
=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20210915015059-6768: (2.793709846s)
--- PASS: TestOffline (65.40s)

                                                
                                    
TestAddons/Setup (164.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20210915012342-6768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --container-runtime=docker --addons=ingress --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-20210915012342-6768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --container-runtime=docker --addons=ingress --addons=helm-tiller: (2m20.8828836s)
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons enable gcp-auth
addons_test.go:89: (dbg) Done: out/minikube-linux-amd64 -p addons-20210915012342-6768 addons enable gcp-auth: (14.033139241s)
addons_test.go:99: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons enable gcp-auth --force
addons_test.go:99: (dbg) Done: out/minikube-linux-amd64 -p addons-20210915012342-6768 addons enable gcp-auth --force: (9.837807839s)
--- PASS: TestAddons/Setup (164.76s)

                                                
                                    
TestAddons/parallel/Registry (27.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:253: registry stabilized in 12.836698ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:255: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-d2wk4" [89a45b74-b58c-468a-8c45-d173530c049f] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:255: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014451378s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:258: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-vhhfv" [985239f4-f991-4990-bf20-39effa769ac7] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:258: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020226872s
addons_test.go:263: (dbg) Run:  kubectl --context addons-20210915012342-6768 delete po -l run=registry-test --now
addons_test.go:268: (dbg) Run:  kubectl --context addons-20210915012342-6768 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:268: (dbg) Done: kubectl --context addons-20210915012342-6768 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.325911077s)
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 ip
2021/09/15 01:26:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (27.49s)

                                                
                                    
TestAddons/parallel/Ingress (39.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:170: (dbg) Run:  kubectl --context addons-20210915012342-6768 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Run:  kubectl --context addons-20210915012342-6768 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:190: (dbg) Run:  kubectl --context addons-20210915012342-6768 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [98de15ff-e3fe-4dab-95b2-3952200eb0d1] Pending
helpers_test.go:343: "nginx" [98de15ff-e3fe-4dab-95b2-3952200eb0d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [98de15ff-e3fe-4dab-95b2-3952200eb0d1] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00594342s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:234: (dbg) Done: out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable ingress --alsologtostderr -v=1: (28.678372508s)
--- PASS: TestAddons/parallel/Ingress (39.55s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:330: metrics-server stabilized in 14.234525ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-wpjcb" [47810a13-c9ae-42d6-a4b8-981ff0c391d9] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013918643s
addons_test.go:338: (dbg) Run:  kubectl --context addons-20210915012342-6768 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

                                                
                                    
TestAddons/parallel/HelmTiller (23.63s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:379: tiller-deploy stabilized in 2.28227ms
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-7d9fb5c894-gqw79" [7780487d-b571-401f-a059-bb6ed78f19c1] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007371888s
addons_test.go:396: (dbg) Run:  kubectl --context addons-20210915012342-6768 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:396: (dbg) Done: kubectl --context addons-20210915012342-6768 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (18.241083835s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (23.63s)

                                                
                                    
TestAddons/parallel/Olm (41.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:425: (dbg) Run:  kubectl --context addons-20210915012342-6768 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:428: catalog-operator stabilized in 245.452954ms
addons_test.go:430: (dbg) Run:  kubectl --context addons-20210915012342-6768 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:433: olm-operator stabilized in 312.463859ms
addons_test.go:435: (dbg) Run:  kubectl --context addons-20210915012342-6768 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:438: packageserver stabilized in 377.305445ms
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210915012342-6768 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:443: operatorhubio-catalog stabilized in 436.165383ms
addons_test.go:446: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/etcd.yaml
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915012342-6768 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915012342-6768 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915012342-6768 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915012342-6768 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (41.52s)

                                                
                                    
TestAddons/parallel/GCPAuth (35.89s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:576: (dbg) Run:  kubectl --context addons-20210915012342-6768 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [00458419-8eca-46db-8691-48616a89f6ea] Pending

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [00458419-8eca-46db-8691-48616a89f6ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [00458419-8eca-46db-8691-48616a89f6ea] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.004654735s
addons_test.go:588: (dbg) Run:  kubectl --context addons-20210915012342-6768 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:625: (dbg) Run:  kubectl --context addons-20210915012342-6768 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:641: (dbg) Run:  kubectl --context addons-20210915012342-6768 apply -f testdata/private-image.yaml
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-zr6nw" [4529323c-6dd4-40f0-b886-b731b07927af] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-zr6nw" [4529323c-6dd4-40f0-b886-b731b07927af] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 15.08098332s
addons_test.go:654: (dbg) Run:  kubectl --context addons-20210915012342-6768 apply -f testdata/private-image-eu.yaml
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-s4zkt" [00aac74d-9571-4a47-8858-703f8e8f1cfe] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-s4zkt" [00aac74d-9571-4a47-8858-703f8e8f1cfe] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 10.011913023s
addons_test.go:667: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210915012342-6768 addons disable gcp-auth --alsologtostderr -v=1
--- PASS: TestAddons/parallel/GCPAuth (35.89s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20210915012342-6768
addons_test.go:140: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20210915012342-6768: (12.169282223s)
addons_test.go:144: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20210915012342-6768
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20210915012342-6768
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

                                                
                                    
TestCertOptions (43.69s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210915015309-6768 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210915015309-6768 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (39.72582296s)
cert_options_test.go:59: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210915015309-6768 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:74: (dbg) Run:  kubectl --context cert-options-20210915015309-6768 config view
helpers_test.go:176: Cleaning up "cert-options-20210915015309-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210915015309-6768

                                                
                                                
=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210915015309-6768: (3.349403161s)
--- PASS: TestCertOptions (43.69s)

                                                
                                    
TestDockerFlags (28.88s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20210915015234-6768 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20210915015234-6768 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.319884346s)
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210915015234-6768 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210915015234-6768 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20210915015234-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20210915015234-6768

                                                
                                                
=== CONT  TestDockerFlags
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20210915015234-6768: (2.726607385s)
--- PASS: TestDockerFlags (28.88s)

                                                
                                    
TestForceSystemdFlag (38.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210915015302-6768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210915015302-6768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.773358565s)
docker_test.go:103: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210915015302-6768 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210915015302-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210915015302-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210915015302-6768: (3.064102776s)
--- PASS: TestForceSystemdFlag (38.34s)

                                                
                                    
TestForceSystemdEnv (50.08s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210915015059-6768 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210915015059-6768 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.847426146s)
docker_test.go:103: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210915015059-6768 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20210915015059-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210915015059-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210915015059-6768: (2.78573657s)
--- PASS: TestForceSystemdEnv (50.08s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.99s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.99s)

                                                
                                    
TestErrorSpam/setup (21.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210915013335-6768 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210915013335-6768 --driver=docker  --container-runtime=docker
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210915013335-6768 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210915013335-6768 --driver=docker  --container-runtime=docker: (21.07765448s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.1."
--- PASS: TestErrorSpam/setup (21.08s)

                                                
                                    
TestErrorSpam/start (0.88s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 start --dry-run
--- PASS: TestErrorSpam/start (0.88s)

                                                
                                    
TestErrorSpam/status (1.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (1.26s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 pause
--- PASS: TestErrorSpam/pause (1.26s)

                                                
                                    
TestErrorSpam/unpause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

                                                
                                    
TestErrorSpam/stop (14.91s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 stop: (14.651876838s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210915013335-6768 --log_dir /tmp/nospam-20210915013335-6768 stop
--- PASS: TestErrorSpam/stop (14.91s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1726: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/files/etc/test/nested/copy/6768/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.97s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2102: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2102: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210915013418-6768 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (39.973649916s)
--- PASS: TestFunctional/serial/StartWithProxy (39.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:747: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --alsologtostderr -v=8
functional_test.go:747: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210915013418-6768 --alsologtostderr -v=8: (5.267696972s)
functional_test.go:751: soft start took 5.268271098s for "functional-20210915013418-6768" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.27s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:767: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:780: (dbg) Run:  kubectl --context functional-20210915013418-6768 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add k8s.gcr.io/pause:3.1
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add k8s.gcr.io/pause:3.3: (5.279463757s)
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add k8s.gcr.io/pause:latest
functional_test.go:1102: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add k8s.gcr.io/pause:latest: (1.596511073s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1132: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210915013418-6768 /tmp/functional-20210915013418-676840412020
functional_test.go:1132: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210915013418-6768 /tmp/functional-20210915013418-676840412020: (3.832136694s)
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache add minikube-local-cache-test:functional-20210915013418-6768
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache delete minikube-local-cache-test:functional-20210915013418-6768
functional_test.go:1138: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210915013418-6768
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.67s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1198: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1204: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (329.298615ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1209: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cache reload
functional_test.go:1209: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 cache reload: (1.088775249s)
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1223: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1223: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:798: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 kubectl -- --context functional-20210915013418-6768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:821: (dbg) Run:  out/kubectl --context functional-20210915013418-6768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:835: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:835: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210915013418-6768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.13336595s)
functional_test.go:839: restart took 29.133477381s for "functional-20210915013418-6768" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.13s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:886: (dbg) Run:  kubectl --context functional-20210915013418-6768 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:900: etcd phase: Running
functional_test.go:910: etcd status: Ready
functional_test.go:900: kube-apiserver phase: Running
functional_test.go:910: kube-apiserver status: Ready
functional_test.go:900: kube-controller-manager phase: Running
functional_test.go:910: kube-controller-manager status: Ready
functional_test.go:900: kube-scheduler phase: Running
functional_test.go:910: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 logs
functional_test.go:1285: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 logs: (1.162953759s)
--- PASS: TestFunctional/serial/LogsCmd (1.16s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1301: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 logs --file /tmp/functional-20210915013418-67684220298160/logs.txt
functional_test.go:1301: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 logs --file /tmp/functional-20210915013418-67684220298160/logs.txt: (1.154951959s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config get cpus
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 config get cpus: exit status 14 (54.102704ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config set cpus 2
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config get cpus
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 config get cpus
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 config get cpus: exit status 14 (54.322538ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (3.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:977: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210915013418-6768 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:982: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210915013418-6768 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 52296: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.44s)

                                                
                                    
TestFunctional/parallel/DryRun (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1039: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (238.82088ms)

                                                
                                                
-- stdout --
	* [functional-20210915013418-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	  - MINIKUBE_LOCATION=12425
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 01:36:14.604205   51475 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:36:14.604431   51475 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:36:14.604442   51475 out.go:311] Setting ErrFile to fd 2...
	I0915 01:36:14.604448   51475 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:36:14.604585   51475 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	I0915 01:36:14.604822   51475 out.go:305] Setting JSON to false
	I0915 01:36:14.643372   51475 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":1137,"bootTime":1631668637,"procs":238,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0915 01:36:14.643457   51475 start.go:121] virtualization: kvm guest
	I0915 01:36:14.645835   51475 out.go:177] * [functional-20210915013418-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
	I0915 01:36:14.647720   51475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	I0915 01:36:14.648992   51475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 01:36:14.650481   51475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	I0915 01:36:14.652719   51475 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 01:36:14.653118   51475 config.go:177] Loaded profile config "functional-20210915013418-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:36:14.653453   51475 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 01:36:14.703622   51475 docker.go:132] docker version: linux-19.03.15
	I0915 01:36:14.703716   51475 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:36:14.784981   51475 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:183 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-09-15 01:36:14.740640718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:36:14.785066   51475 docker.go:237] overlay module found
	I0915 01:36:14.786651   51475 out.go:177] * Using the docker driver based on existing profile
	I0915 01:36:14.786675   51475 start.go:278] selected driver: docker
	I0915 01:36:14.786681   51475 start.go:751] validating driver "docker" against &{Name:functional-20210915013418-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915013418-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:36:14.786778   51475 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0915 01:36:14.786810   51475 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0915 01:36:14.786831   51475 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0915 01:36:14.788134   51475 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0915 01:36:14.789910   51475 out.go:177] 
	W0915 01:36:14.790007   51475 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 01:36:14.791182   51475 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:1054: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.57s)
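
Note: exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is exactly what this test asserts; 250MB is below minikube's 1800MB usable minimum, so validation rejects the request before any node is touched. A sketch of the two invocations the test contrasts, assuming the same profile and driver as in the log:

    # under-provisioned request: validation fails, exit status 23
    out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run \
      --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
    # same dry run without the memory override: validation passes, exit status 0
    out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker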

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (242.412902ms)

                                                
                                                
-- stdout --
	* [functional-20210915013418-6768] minikube v1.23.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	  - MINIKUBE_LOCATION=12425
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 01:36:14.728871   51512 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:36:14.728972   51512 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:36:14.728976   51512 out.go:311] Setting ErrFile to fd 2...
	I0915 01:36:14.728980   51512 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:36:14.729104   51512 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	I0915 01:36:14.729298   51512 out.go:305] Setting JSON to false
	I0915 01:36:14.768316   51512 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":1137,"bootTime":1631668637,"procs":238,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0915 01:36:14.768431   51512 start.go:121] virtualization: kvm guest
	I0915 01:36:14.770637   51512 out.go:177] * [functional-20210915013418-6768] minikube v1.23.0 sur Debian 9.13 (kvm/amd64)
	I0915 01:36:14.772163   51512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	I0915 01:36:14.773450   51512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 01:36:14.774780   51512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	I0915 01:36:14.776086   51512 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 01:36:14.776467   51512 config.go:177] Loaded profile config "functional-20210915013418-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:36:14.776815   51512 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 01:36:14.828853   51512 docker.go:132] docker version: linux-19.03.15
	I0915 01:36:14.828952   51512 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:36:14.909285   51512 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:183 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-09-15 01:36:14.864941065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:36:14.909384   51512 docker.go:237] overlay module found
	I0915 01:36:14.911174   51512 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 01:36:14.911198   51512 start.go:278] selected driver: docker
	I0915 01:36:14.911203   51512 start.go:751] validating driver "docker" against &{Name:functional-20210915013418-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915013418-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:36:14.911302   51512 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0915 01:36:14.911330   51512 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0915 01:36:14.911347   51512 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0915 01:36:14.912626   51512 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0915 01:36:14.914935   51512 out.go:177] 
	W0915 01:36:14.915048   51512 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 01:36:14.916511   51512 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
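
Note: this test repeats the failing dry run from TestFunctional/parallel/DryRun but asserts that the output is localized (French here). The log does not show how the locale is selected; a plausible reproduction uses the standard locale environment variables, which is an assumption rather than something recorded above:

    # assumes minikube picks its message language from LC_ALL/LANG
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-20210915013418-6768 --dry-run \
      --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
    # expected: the same RSRC_INSUFFICIENT_REQ_MEMORY failure, with French messages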

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 status
functional_test.go:935: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:946: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd (19.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1477: (dbg) Run:  kubectl --context functional-20210915013418-6768 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1483: (dbg) Run:  kubectl --context functional-20210915013418-6768 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-29dtj" [b5b79825-990d-4b07-bdd4-da8e25e6f41b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-29dtj" [b5b79825-990d-4b07-bdd4-da8e25e6f41b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 17.005839821s
functional_test.go:1492: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1492: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 service list: (1.40353356s)
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 service --namespace=default --https --url hello-node
2021/09/15 01:36:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1514: found endpoint: https://192.168.49.2:31304
functional_test.go:1525: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1540: found endpoint for hello-node: http://192.168.49.2:31304
functional_test.go:1551: Attempting to fetch http://192.168.49.2:31304 ...
functional_test.go:1570: http://192.168.49.2:31304: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6cbfcd7cbc-29dtj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31304
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (19.88s)
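
Note: the ServiceCmd block above exercises the full deploy, expose, URL-discovery, and HTTP-fetch loop. The same sequence by hand, using the image and names from the log (the NodePort 31304 is assigned by Kubernetes and will differ between runs):

    kubectl --context functional-20210915013418-6768 create deployment hello-node \
      --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20210915013418-6768 expose deployment hello-node \
      --type=NodePort --port=8080
    # wait for the pod, then ask minikube for the reachable URL and fetch it
    kubectl --context functional-20210915013418-6768 wait --for=condition=Ready \
      pod -l app=hello-node --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-20210915013418-6768 service hello-node --url)
    curl -s "$URL"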

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1585: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1596: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (32.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [2acf293d-fdf1-416e-9671-68d44eabf899] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007178687s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20210915013418-6768 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20210915013418-6768 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [3289c3cc-9c1a-4bbc-b342-253264cc6713] Pending
helpers_test.go:343: "sp-pod" [3289c3cc-9c1a-4bbc-b342-253264cc6713] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3289c3cc-9c1a-4bbc-b342-253264cc6713] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.007957014s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20210915013418-6768 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20210915013418-6768 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20210915013418-6768 delete -f testdata/storage-provisioner/pod.yaml: (1.53657574s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [b9cc9688-5476-4367-a393-589ab20b295b] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [b9cc9688-5476-4367-a393-589ab20b295b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [b9cc9688-5476-4367-a393-589ab20b295b] Running
E0915 01:36:27.758712    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:27.764312    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:27.774543    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:27.794779    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:27.835003    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:27.915301    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:28.075674    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007657513s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20210915013418-6768 exec sp-pod -- ls /tmp/mount
E0915 01:36:28.396492    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.62s)
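
Note: the PVC test verifies that data written through the claim survives pod deletion: it writes /tmp/mount/foo, deletes sp-pod, recreates it from the same manifest, and lists the mount again. The same steps by hand, reusing the testdata manifests referenced above (their contents are not shown in this log):

    kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210915013418-6768 wait --for=condition=Ready pod sp-pod --timeout=180s
    kubectl --context functional-20210915013418-6768 exec sp-pod -- touch /tmp/mount/foo
    # delete and recreate the pod; the claim (and its data) is untouched
    kubectl --context functional-20210915013418-6768 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210915013418-6768 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210915013418-6768 wait --for=condition=Ready pod sp-pod --timeout=180s
    kubectl --context functional-20210915013418-6768 exec sp-pod -- ls /tmp/mount   # should list "foo"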

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1618: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1635: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) Run:  kubectl --context functional-20210915013418-6768 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-j4xx4" [829e72dd-7a82-4540-9a20-46eb9be81586] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-j4xx4" [829e72dd-7a82-4540-9a20-46eb9be81586] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-j4xx4" [829e72dd-7a82-4540-9a20-46eb9be81586] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.01423343s
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- mysql -ppassword -e "show databases;": exit status 1 (385.932508ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- mysql -ppassword -e "show databases;": exit status 1 (135.067261ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.10s)
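
Note: the two non-zero exits above are benign; the mysql client is queried while mysqld is still initializing inside the pod, so the test simply retries until the query succeeds. A sketch of the same retry by hand (the pod name is generated per run; this one is taken from the log):

    # retry until the server accepts connections; each failed attempt exits non-zero
    until kubectl --context functional-20210915013418-6768 exec mysql-9bbbc5bbb-j4xx4 -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done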

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1798: Checking for existence of /etc/test/nested/copy/6768/hosts within VM
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /etc/test/nested/copy/6768/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1804: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
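
Note: FileSync checks that a host file was mirrored into the node at /etc/test/nested/copy/6768/hosts. minikube copies anything staged under $MINIKUBE_HOME/files/ into the node at the same path on start; a sketch of that mechanism, assuming the profile is (re)started after staging (the exact file the test stages is not shown in this log, only its synced content):

    # stage a file under $MINIKUBE_HOME/files; it is mirrored into the node on start
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/6768"
    echo "Test file for checking file sync process" > \
      "$MINIKUBE_HOME/files/etc/test/nested/copy/6768/hosts"
    out/minikube-linux-amd64 start -p functional-20210915013418-6768
    out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh \
      "sudo cat /etc/test/nested/copy/6768/hosts"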

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/6768.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /etc/ssl/certs/6768.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /usr/share/ca-certificates/6768.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /usr/share/ca-certificates/6768.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1866: Checking for existence of /etc/ssl/certs/67682.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /etc/ssl/certs/67682.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /usr/share/ca-certificates/67682.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /usr/share/ca-certificates/67682.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-20210915013418-6768 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImage (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:241: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:248: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210915013418-6768
functional_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image load --daemon docker.io/library/busybox:load-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:470: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- docker image inspect docker.io/library/busybox:load-functional-20210915013418-6768
--- PASS: TestFunctional/parallel/LoadImage (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/SaveImage (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/SaveImage
=== PAUSE TestFunctional/parallel/SaveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image pull docker.io/library/busybox:1.29

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 image pull docker.io/library/busybox:1.29: (1.07351881s)
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image tag docker.io/library/busybox:1.29 docker.io/library/busybox:save-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image save --daemon docker.io/library/busybox:save-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:400: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImage (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/RemoveImage (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:333: (dbg) Run:  docker pull busybox:1.32

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:340: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210915013418-6768
functional_test.go:346: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image load docker.io/library/busybox:remove-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image rm docker.io/library/busybox:remove-functional-20210915013418-6768
functional_test.go:484: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- docker images
--- PASS: TestFunctional/parallel/RemoveImage (1.91s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImageFromFile (5.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:281: (dbg) Run:  docker pull busybox:1.31

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:281: (dbg) Done: docker pull busybox:1.31: (3.719070492s)
functional_test.go:288: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210915013418-6768
functional_test.go:295: (dbg) Run:  docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image load /home/jenkins/workspace/Docker_Linux_integration/busybox-load.tar

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 image load /home/jenkins/workspace/Docker_Linux_integration/busybox-load.tar: (1.238548199s)
functional_test.go:484: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- docker images
--- PASS: TestFunctional/parallel/LoadImageFromFile (5.58s)
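
Note: LoadImageFromFile covers the tarball path: pull and tag on the host, save to a tar archive, then hand the tar to minikube. The same flow by hand, with an arbitrary local path for the tarball:

    docker pull busybox:1.31
    docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210915013418-6768
    docker save -o /tmp/busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915013418-6768
    out/minikube-linux-amd64 -p functional-20210915013418-6768 image load /tmp/busybox-load.tar
    # confirm the image is now present inside the node's docker daemon
    out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- docker images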

                                                
                                    
x
+
TestFunctional/parallel/SaveImageToFile (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/SaveImageToFile
=== PAUSE TestFunctional/parallel/SaveImageToFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image pull docker.io/library/busybox:1.30

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 image pull docker.io/library/busybox:1.30: (1.229846111s)
functional_test.go:429: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image tag docker.io/library/busybox:1.30 docker.io/library/busybox:save-to-file-functional-20210915013418-6768

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image save docker.io/library/busybox:save-to-file-functional-20210915013418-6768 /home/jenkins/workspace/Docker_Linux_integration/busybox-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:446: (dbg) Run:  docker load -i /home/jenkins/workspace/Docker_Linux_integration/busybox-save.tar
functional_test.go:453: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImageToFile (2.17s)

                                                
                                    
x
+
TestFunctional/parallel/BuildImage (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image build -t localhost/my-image:functional-20210915013418-6768 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Done: out/minikube-linux-amd64 -p functional-20210915013418-6768 image build -t localhost/my-image:functional-20210915013418-6768 testdata/build: (2.278917519s)
functional_test.go:509: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210915013418-6768 image build -t localhost/my-image:functional-20210915013418-6768 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
24fb2886d6f6: Pulling fs layer
24fb2886d6f6: Verifying Checksum
24fb2886d6f6: Download complete
24fb2886d6f6: Pull complete
Digest: sha256:52f73a0a43a16cf37cd0720c90887ce972fe60ee06a687ee71fb93a7ca601df7
Status: Downloaded newer image for busybox:latest
---> 16ea53ea7c65
Step 2/3 : RUN true
---> Running in 50e63507a8b0
Removing intermediate container 50e63507a8b0
---> c277bfa46769
Step 3/3 : ADD content.txt /
---> c6671decf89b
Successfully built c6671decf89b
Successfully tagged localhost/my-image:functional-20210915013418-6768
functional_test.go:470: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- docker image inspect localhost/my-image:functional-20210915013418-6768
--- PASS: TestFunctional/parallel/BuildImage (2.64s)
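
Note: the build output above implies a three-step Dockerfile in testdata/build (FROM busybox, RUN true, ADD content.txt /). A reconstruction of a minimal equivalent build context, assuming placeholder content for content.txt, built the same way:

    mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
    echo "hello" > content.txt                      # placeholder; the real content.txt is not shown in the log
    printf 'FROM busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-20210915013418-6768 image build \
      -t localhost/my-image:functional-20210915013418-6768 .
    out/minikube-linux-amd64 ssh -p functional-20210915013418-6768 -- \
      docker image inspect localhost/my-image:functional-20210915013418-6768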

                                                
                                    
x
+
TestFunctional/parallel/ListImages (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:538: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:543: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210915013418-6768 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210915013418-6768
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
--- PASS: TestFunctional/parallel/ListImages (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo systemctl is-active crio"
functional_test.go:1894: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo systemctl is-active crio": exit status 1 (359.003497ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2123: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:601: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210915013418-6768 docker-env) && out/minikube-linux-amd64 status -p functional-20210915013418-6768"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:622: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210915013418-6768 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.36s)
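Note: the docker-env flow points a local docker client at the daemon inside the minikube node. A minimal sketch of the same round trip, assuming a bash shell and a running profile:
$ eval $(out/minikube-linux-amd64 -p functional-20210915013418-6768 docker-env) && docker images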

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1985: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
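Note: as I understand it, update-context rewrites the kubeconfig entry for the profile so it matches the cluster's current IP and port; running it when nothing has changed (as here) should be a no-op. A minimal sketch:
$ out/minikube-linux-amd64 -p functional-20210915013418-6768 update-context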

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1985: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1985: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210915013418-6768 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20210915013418-6768 apply -f testdata/testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [dccfe81d-599b-4432-ac84-841950fbf302] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [dccfe81d-599b-4432-ac84-841950fbf302] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.006105041s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.30s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20210915013418-6768 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.101.128.82 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210915013418-6768 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
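Note: the tunnel serial tests above exercise the usual LoadBalancer workflow. A minimal manual sketch, assuming curl is available and testsvc.yaml is the repo's test manifest; tunnel may prompt for elevated privileges depending on the platform:
$ out/minikube-linux-amd64 -p functional-20210915013418-6768 tunnel   # leave running in its own terminal
$ kubectl --context functional-20210915013418-6768 apply -f testdata/testsvc.yaml
$ kubectl --context functional-20210915013418-6768 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ curl http://<ingress-ip>/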

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1326: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1365: Took "361.701331ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1379: Took "51.194686ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1410: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1415: Took "359.535088ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1423: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1428: Took "54.339661ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
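Note: profile list has both human-readable and JSON output, and --light appears to skip the slower status probing (compare roughly 360ms vs 54ms above). A minimal sketch, assuming jq is available; the JSON field layout may vary by minikube version:
$ out/minikube-linux-amd64 profile list -o json --light
$ out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'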

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:77: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/mounttest2805150276:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:111: wrote "test-1631669774918577304" to /tmp/mounttest2805150276/created-by-test
functional_test_mount_test.go:111: wrote "test-1631669774918577304" to /tmp/mounttest2805150276/created-by-test-removed-by-pod
functional_test_mount_test.go:111: wrote "test-1631669774918577304" to /tmp/mounttest2805150276/test-1631669774918577304
functional_test_mount_test.go:119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:119: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.685922ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:137: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 01:36 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 01:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 01:36 test-1631669774918577304
functional_test_mount_test.go:141: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh cat /mount-9p/test-1631669774918577304

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:152: (dbg) Run:  kubectl --context functional-20210915013418-6768 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [ca67a0e9-2242-4163-b037-fc78893d5f06] Pending
helpers_test.go:343: "busybox-mount" [ca67a0e9-2242-4163-b037-fc78893d5f06] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [ca67a0e9-2242-4163-b037-fc78893d5f06] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.006396083s
functional_test_mount_test.go:173: (dbg) Run:  kubectl --context functional-20210915013418-6768 logs busybox-mount
functional_test_mount_test.go:185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:94: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:98: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/mounttest2805150276:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.15s)
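Note: the 9p mount flow above can be reproduced by hand. A minimal sketch, assuming /tmp/demo-mount exists on the host and the same profile is running:
$ out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/demo-mount:/mount-9p   # leave running in its own terminal
$ out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p"
$ out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh -- ls -la /mount-9p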

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:226: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/mounttest1973204068:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.360586ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:270: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:274: guest mount directory contents
total 0
functional_test_mount_test.go:276: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/mounttest1973204068:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:277: reading mount text
functional_test_mount_test.go:291: done reading mount text
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh "sudo umount -f /mount-9p": exit status 1 (323.46531ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:245: "out/minikube-linux-amd64 -p functional-20210915013418-6768 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:247: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210915013418-6768 /tmp/mounttest1973204068:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
x
+
TestFunctional/delete_busybox_image (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:186: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210915013418-6768
functional_test.go:191: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210915013418-6768
--- PASS: TestFunctional/delete_busybox_image (0.09s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210915013418-6768
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210915013418-6768
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.07s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20210915013631-6768 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0915 01:36:32.878630    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:37.999410    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:36:48.239943    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:37:08.720765    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20210915013631-6768 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (42.065697916s)
--- PASS: TestJSONOutput/start/Command (42.07s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.54s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20210915013631-6768 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20210915013631-6768 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (11.07s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20210915013631-6768 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20210915013631-6768 --output=json --user=testUser: (11.071369442s)
--- PASS: TestJSONOutput/stop/Command (11.07s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.32s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210915013727-6768 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210915013727-6768 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.945949ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"31d3ba1b-f9ae-496e-a722-1bb84928f9d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20210915013727-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6db243f6-27c4-492b-9555-29e52ae2750e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig"}}
	{"specversion":"1.0","id":"cd1f325c-5a67-49c0-8d38-b0b3365908c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4d9b6704-11ee-4f04-a7b0-844fcf64958c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube"}}
	{"specversion":"1.0","id":"f307e7c1-0be8-4ab9-b38e-28459e186ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"d223ad67-888b-4f5f-8326-8016b799ffe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210915013727-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210915013727-6768
--- PASS: TestErrorJSONOutput (0.32s)
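Note: with --output=json each line minikube emits is a CloudEvents-style JSON object (see the stdout block above), so output can be post-processed mechanically. A minimal sketch, assuming jq is available; the profile name is illustrative and --driver=fail is used only to force the error event shown above:
$ out/minikube-linux-amd64 start -p demo --output=json --driver=fail 2>/dev/null | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'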

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (23.19s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210915013728-6768 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210915013728-6768 --network=: (20.66784437s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915013728-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210915013728-6768
E0915 01:37:49.681771    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210915013728-6768: (2.48426975s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.19s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.54s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210915013751-6768 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210915013751-6768 --network=bridge: (21.200561636s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915013751-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210915013751-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210915013751-6768: (2.298386663s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.54s)

                                                
                                    
x
+
TestKicExistingNetwork (25.14s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210915013815-6768 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210915013815-6768 --network=existing-network: (22.39206244s)
helpers_test.go:176: Cleaning up "existing-network-20210915013815-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210915013815-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210915013815-6768: (2.505960268s)
--- PASS: TestKicExistingNetwork (25.14s)
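Note: the KIC network tests cover three cases: a custom network name, the default docker bridge, and a pre-existing network. A minimal sketch for the pre-existing case, assuming the docker CLI is available; the network and profile names are illustrative:
$ docker network create demo-net
$ out/minikube-linux-amd64 start -p demo --network=demo-net
$ docker network ls --format '{{.Name}}'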

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (74.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 01:39:11.601917    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
multinode_test.go:82: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m14.146886121s)
multinode_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.71s)
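Note: the multi-node group starts a two-node cluster up front and the later subtests reuse it. A minimal sketch of the same bring-up, with an illustrative profile name:
$ out/minikube-linux-amd64 start -p demo --nodes=2 --driver=docker --container-runtime=docker
$ out/minikube-linux-amd64 -p demo status
$ kubectl get nodes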

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:468: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- rollout status deployment/busybox: (2.890251968s)
multinode_test.go:474: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-b7hzq -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-jnvtz -- nslookup kubernetes.io
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-b7hzq -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-jnvtz -- nslookup kubernetes.default
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-b7hzq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-jnvtz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.85s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-b7hzq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-b7hzq -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-jnvtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210915013840-6768 -- exec busybox-84b6686758-jnvtz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (44.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210915013840-6768 -v 3 --alsologtostderr
multinode_test.go:107: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210915013840-6768 -v 3 --alsologtostderr: (43.575197231s)
multinode_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.34s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (2.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 cp testdata/cp-test.txt multinode-20210915013840-6768-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 ssh -n multinode-20210915013840-6768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 cp testdata/cp-test.txt multinode-20210915013840-6768-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 ssh -n multinode-20210915013840-6768-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.78s)
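Note: cp targets the primary node unless a node name is prefixed to the destination path, which is what the checks above verify (node names follow the <profile>-m0N pattern seen in this run). A minimal sketch:
$ out/minikube-linux-amd64 -p multinode-20210915013840-6768 cp testdata/cp-test.txt multinode-20210915013840-6768-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-20210915013840-6768 ssh -n multinode-20210915013840-6768-m02 "sudo cat /home/docker/cp-test.txt"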

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210915013840-6768 node stop m03: (1.310425046s)
multinode_test.go:198: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210915013840-6768 status: exit status 7 (607.716239ms)

                                                
                                                
-- stdout --
	multinode-20210915013840-6768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915013840-6768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915013840-6768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr: exit status 7 (680.530811ms)

                                                
                                                
-- stdout --
	multinode-20210915013840-6768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915013840-6768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915013840-6768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 01:40:49.885663   88255 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:40:49.885737   88255 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:40:49.885753   88255 out.go:311] Setting ErrFile to fd 2...
	I0915 01:40:49.885759   88255 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:40:49.885861   88255 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	I0915 01:40:49.885993   88255 out.go:305] Setting JSON to false
	I0915 01:40:49.886009   88255 mustload.go:65] Loading cluster: multinode-20210915013840-6768
	I0915 01:40:49.886265   88255 config.go:177] Loaded profile config "multinode-20210915013840-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:40:49.886276   88255 status.go:253] checking status of multinode-20210915013840-6768 ...
	I0915 01:40:49.886610   88255 cli_runner.go:115] Run: docker container inspect multinode-20210915013840-6768 --format={{.State.Status}}
	I0915 01:40:49.924555   88255 status.go:328] multinode-20210915013840-6768 host status = "Running" (err=<nil>)
	I0915 01:40:49.924575   88255 host.go:66] Checking if "multinode-20210915013840-6768" exists ...
	I0915 01:40:49.924859   88255 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915013840-6768
	I0915 01:40:49.960655   88255 host.go:66] Checking if "multinode-20210915013840-6768" exists ...
	I0915 01:40:49.960885   88255 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 01:40:49.960946   88255 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915013840-6768
	I0915 01:40:49.997503   88255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/multinode-20210915013840-6768/id_rsa Username:docker}
	I0915 01:40:50.071987   88255 ssh_runner.go:152] Run: systemctl --version
	I0915 01:40:50.075257   88255 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 01:40:50.083274   88255 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:40:50.160232   88255 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:183 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2021-09-15 01:40:50.118583179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0915 01:40:50.161039   88255 kubeconfig.go:93] found "multinode-20210915013840-6768" server: "https://192.168.49.2:8443"
	I0915 01:40:50.161063   88255 api_server.go:164] Checking apiserver status ...
	I0915 01:40:50.161095   88255 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 01:40:50.179582   88255 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/1894/cgroup
	I0915 01:40:50.186258   88255 api_server.go:180] apiserver freezer: "4:freezer:/docker/3da264223f8b936909dc0be62b78520a696ecb6a2ed5801fbf98ee0f02f84737/kubepods/burstable/pod2a59d191a6fe2330843d0e563e466129/22661180f3f3ba94f33c463db877e3c6ae4f7e826b68f8a7c3edb274df5a50ee"
	I0915 01:40:50.186317   88255 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/3da264223f8b936909dc0be62b78520a696ecb6a2ed5801fbf98ee0f02f84737/kubepods/burstable/pod2a59d191a6fe2330843d0e563e466129/22661180f3f3ba94f33c463db877e3c6ae4f7e826b68f8a7c3edb274df5a50ee/freezer.state
	I0915 01:40:50.192003   88255 api_server.go:202] freezer state: "THAWED"
	I0915 01:40:50.192040   88255 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 01:40:50.197033   88255 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 01:40:50.197055   88255 status.go:419] multinode-20210915013840-6768 apiserver status = Running (err=<nil>)
	I0915 01:40:50.197067   88255 status.go:255] multinode-20210915013840-6768 status: &{Name:multinode-20210915013840-6768 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 01:40:50.197089   88255 status.go:253] checking status of multinode-20210915013840-6768-m02 ...
	I0915 01:40:50.197378   88255 cli_runner.go:115] Run: docker container inspect multinode-20210915013840-6768-m02 --format={{.State.Status}}
	I0915 01:40:50.234563   88255 status.go:328] multinode-20210915013840-6768-m02 host status = "Running" (err=<nil>)
	I0915 01:40:50.234582   88255 host.go:66] Checking if "multinode-20210915013840-6768-m02" exists ...
	I0915 01:40:50.234859   88255 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915013840-6768-m02
	I0915 01:40:50.270885   88255 host.go:66] Checking if "multinode-20210915013840-6768-m02" exists ...
	I0915 01:40:50.271158   88255 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 01:40:50.271194   88255 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915013840-6768-m02
	I0915 01:40:50.307903   88255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/multinode-20210915013840-6768-m02/id_rsa Username:docker}
	I0915 01:40:50.472186   88255 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 01:40:50.480636   88255 status.go:255] multinode-20210915013840-6768-m02 status: &{Name:multinode-20210915013840-6768-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 01:40:50.480664   88255 status.go:253] checking status of multinode-20210915013840-6768-m03 ...
	I0915 01:40:50.480931   88255 cli_runner.go:115] Run: docker container inspect multinode-20210915013840-6768-m03 --format={{.State.Status}}
	I0915 01:40:50.519538   88255 status.go:328] multinode-20210915013840-6768-m03 host status = "Stopped" (err=<nil>)
	I0915 01:40:50.519559   88255 status.go:341] host is not running, skipping remaining checks
	I0915 01:40:50.519571   88255 status.go:255] multinode-20210915013840-6768-m03 status: &{Name:multinode-20210915013840-6768-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.60s)
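Note: once a node is stopped, status reports Stopped for that node and the command itself exits non-zero (status 7 in this run), which is what the assertions above rely on. A minimal sketch:
$ out/minikube-linux-amd64 -p multinode-20210915013840-6768 node stop m03
$ out/minikube-linux-amd64 -p multinode-20210915013840-6768 status; echo "exit=$?"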

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (24.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 node start m03 --alsologtostderr
E0915 01:40:53.819267    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:53.824528    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:53.834746    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:53.854975    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:53.895217    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:53.975509    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:54.135848    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:54.456417    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:55.097280    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:56.377984    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:40:58.938976    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:41:04.059487    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:41:14.300496    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
multinode_test.go:236: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210915013840-6768 node start m03 --alsologtostderr: (23.82919158s)
multinode_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.69s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (120.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210915013840-6768
multinode_test.go:272: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210915013840-6768
E0915 01:41:27.759587    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:41:34.781662    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
multinode_test.go:272: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210915013840-6768: (23.17567821s)
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true -v=8 --alsologtostderr
E0915 01:41:55.442604    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
E0915 01:42:15.741915    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true -v=8 --alsologtostderr: (1m37.305648052s)
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210915013840-6768
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.58s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 node delete m03
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210915013840-6768 node delete m03: (4.783804947s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  docker volume ls
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.49s)
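
Note: the kubectl call at multinode_test.go:414 above uses a go-template to print the status of each node's "Ready" condition. As a point of reference only (this snippet is not part of the test suite, and the JSON payload below is a made-up, trimmed-down node list), the same template can be evaluated locally with Go's text/template package:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Hypothetical, trimmed-down equivalent of `kubectl get nodes -o json`.
const nodeList = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

// The template string passed to kubectl in the test above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &data); err != nil {
		panic(err)
	}
	// Prints " True" once per node whose Ready condition is True.
	if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}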

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 stop
E0915 01:43:37.663260    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
multinode_test.go:296: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210915013840-6768 stop: (21.586720124s)
multinode_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210915013840-6768 status: exit status 7 (122.212614ms)

                                                
                                                
-- stdout --
	multinode-20210915013840-6768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915013840-6768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr: exit status 7 (118.249066ms)

                                                
                                                
-- stdout --
	multinode-20210915013840-6768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915013840-6768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 01:43:43.037436  103899 out.go:298] Setting OutFile to fd 1 ...
	I0915 01:43:43.037517  103899 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:43:43.037533  103899 out.go:311] Setting ErrFile to fd 2...
	I0915 01:43:43.037539  103899 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:43:43.037642  103899 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
	I0915 01:43:43.037776  103899 out.go:305] Setting JSON to false
	I0915 01:43:43.037791  103899 mustload.go:65] Loading cluster: multinode-20210915013840-6768
	I0915 01:43:43.038072  103899 config.go:177] Loaded profile config "multinode-20210915013840-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 01:43:43.038085  103899 status.go:253] checking status of multinode-20210915013840-6768 ...
	I0915 01:43:43.038406  103899 cli_runner.go:115] Run: docker container inspect multinode-20210915013840-6768 --format={{.State.Status}}
	I0915 01:43:43.074580  103899 status.go:328] multinode-20210915013840-6768 host status = "Stopped" (err=<nil>)
	I0915 01:43:43.074599  103899 status.go:341] host is not running, skipping remaining checks
	I0915 01:43:43.074605  103899 status.go:255] multinode-20210915013840-6768 status: &{Name:multinode-20210915013840-6768 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 01:43:43.074635  103899 status.go:253] checking status of multinode-20210915013840-6768-m02 ...
	I0915 01:43:43.074855  103899 cli_runner.go:115] Run: docker container inspect multinode-20210915013840-6768-m02 --format={{.State.Status}}
	I0915 01:43:43.110031  103899 status.go:328] multinode-20210915013840-6768-m02 host status = "Stopped" (err=<nil>)
	I0915 01:43:43.110051  103899 status.go:341] host is not running, skipping remaining checks
	I0915 01:43:43.110060  103899 status.go:255] multinode-20210915013840-6768-m02 status: &{Name:multinode-20210915013840-6768-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.83s)
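
Note: the "&{Name:... Host:Stopped Kubelet:Stopped ...}" lines in the stderr above are minikube's status value printed with Go's %+v verb. A minimal sketch, assuming a struct with the same field names as shown at status.go:255 (the real minikube type may differ), reproduces that formatting:

package main

import "fmt"

// Status mirrors only the field names visible in the status.go:255 log lines
// above; it is an illustrative stand-in, not minikube's actual type.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	st := &Status{
		Name:       "multinode-20210915013840-6768-m02",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
		Worker:     true,
	}
	// %+v on a struct pointer yields the "&{Name:... Host:Stopped ...}" form seen above.
	fmt.Printf("%+v\n", st)
}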

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210915013840-6768 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.676545021s)
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210915013840-6768 status --alsologtostderr
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.42s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210915013840-6768
multinode_test.go:434: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210915013840-6768-m02 --driver=docker  --container-runtime=docker
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210915013840-6768-m02 --driver=docker  --container-runtime=docker: exit status 14 (100.023998ms)

                                                
                                                
-- stdout --
	* [multinode-20210915013840-6768-m02] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	  - MINIKUBE_LOCATION=12425
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210915013840-6768-m02' is duplicated with machine name 'multinode-20210915013840-6768-m02' in profile 'multinode-20210915013840-6768'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210915013840-6768-m03 --driver=docker  --container-runtime=docker
multinode_test.go:442: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210915013840-6768-m03 --driver=docker  --container-runtime=docker: (21.316029804s)
multinode_test.go:449: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210915013840-6768
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210915013840-6768: exit status 80 (337.404669ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210915013840-6768
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210915013840-6768-m03 already exists in multinode-20210915013840-6768-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210915013840-6768-m03
multinode_test.go:454: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210915013840-6768-m03: (2.544495637s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.35s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (12.04s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (12.035549298s)
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (12.04s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (9.88s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (9.87946594s)
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (9.88s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (9.42s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_10/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (9.418301491s)
--- PASS: TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (9.42s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (8.13s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian_9/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (8.125120491s)
--- PASS: TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (8.13s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (14.17s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
E0915 01:45:53.818268    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (14.170565784s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (14.17s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (13.26s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (13.259362444s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (13.26s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (13.74s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
E0915 01:46:21.504264    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:46:27.759378    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (13.742268169s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (13.74s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (13.19s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver
pkg_install_test.go:106: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb"
pkg_install_test.go:106: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.23.0-0_amd64.deb": (13.187530243s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (13.19s)

                                                
                                    
TestPreload (116.02s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210915014645-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210915014645-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m17.447682466s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210915014645-6768 -- docker pull busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210915014645-6768 -- docker pull busybox: (1.154452531s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210915014645-6768 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210915014645-6768 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (33.298710661s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210915014645-6768 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20210915014645-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210915014645-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210915014645-6768: (3.770071911s)
--- PASS: TestPreload (116.02s)

                                                
                                    
TestScheduledStopUnix (61.47s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210915014841-6768 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210915014841-6768 --memory=2048 --driver=docker  --container-runtime=docker: (20.880901054s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210915014841-6768 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210915014841-6768 -n scheduled-stop-20210915014841-6768
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210915014841-6768 --schedule 8s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210915014841-6768 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210915014841-6768 -n scheduled-stop-20210915014841-6768
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210915014841-6768
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210915014841-6768 --schedule 5s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210915014841-6768
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210915014841-6768: exit status 7 (88.555188ms)

                                                
                                                
-- stdout --
	scheduled-stop-20210915014841-6768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210915014841-6768 -n scheduled-stop-20210915014841-6768
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210915014841-6768 -n scheduled-stop-20210915014841-6768: exit status 7 (84.481588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210915014841-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210915014841-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210915014841-6768: (1.951267983s)
--- PASS: TestScheduledStopUnix (61.47s)

                                                
                                    
TestSkaffold (66.83s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:58: (dbg) Run:  /tmp/skaffold.exe1527796737 version
skaffold_test.go:62: skaffold version: v1.31.0
skaffold_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20210915014943-6768 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20210915014943-6768 --memory=2600 --driver=docker  --container-runtime=docker: (20.51846257s)
skaffold_test.go:85: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:109: (dbg) Run:  /tmp/skaffold.exe1527796737 run --minikube-profile skaffold-20210915014943-6768 --kube-context skaffold-20210915014943-6768 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:109: (dbg) Done: /tmp/skaffold.exe1527796737 run --minikube-profile skaffold-20210915014943-6768 --kube-context skaffold-20210915014943-6768 --status-check=true --port-forward=false --interactive=false: (33.010710046s)
skaffold_test.go:115: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-6dfb585db6-fjjnt" [4e9c4463-4b16-4f73-a1f4-0dd31fc1b499] Running
skaffold_test.go:115: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011537889s
skaffold_test.go:118: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-5c85b57fb6-d7pg2" [b0467970-46d0-469a-903b-42bc554a6513] Running
skaffold_test.go:118: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005404127s
helpers_test.go:176: Cleaning up "skaffold-20210915014943-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20210915014943-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20210915014943-6768: (2.737608997s)
--- PASS: TestSkaffold (66.83s)

                                                
                                    
TestInsufficientStorage (9.26s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210915015049-6768 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0915 01:50:53.819468    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210915015049-6768 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (6.563197467s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9855ecc0-7bf6-42d2-9161-3a5da093ef1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20210915015049-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f00c6e4f-94ea-4876-9c47-21d2e1eb0028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig"}}
	{"specversion":"1.0","id":"3b13322e-4311-4aa3-ac8c-ef82a05ad1c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d0a11c8c-aa57-4226-9777-b92f920ebd55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube"}}
	{"specversion":"1.0","id":"bacdb208-b9fa-4bdc-a8e7-b647bdbfd5b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"fa1e3355-19bf-4cf0-90bc-87218eba5855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e1dc73d2-5f3c-43b0-b123-790df442feb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a32a34cf-8019-446f-966e-25b00f6c9daa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"c71a1aad-9c60-4f62-b672-160bfd84d257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"573b1fbf-5bc1-4eae-9723-b62928ee4efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210915015049-6768 in cluster insufficient-storage-20210915015049-6768","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9d788f2-c12f-401e-9a97-e071eb78043f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"82437fb3-f618-498d-8f91-ffaf0c31d5f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b63ff3e-e4ab-4a53-9a31-435fad2e3192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210915015049-6768 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210915015049-6768 --output=json --layout=cluster: exit status 7 (348.508748ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 01:50:56.796491  162582 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915015049-6768" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig

                                                
                                                
** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210915015049-6768 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210915015049-6768 --output=json --layout=cluster: exit status 7 (329.437403ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 01:50:57.126045  162683 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915015049-6768" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	E0915 01:50:57.137679  162683 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/insufficient-storage-20210915015049-6768/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210915015049-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210915015049-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210915015049-6768: (2.021235266s)
--- PASS: TestInsufficientStorage (9.26s)
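
Note: the two `status --output=json --layout=cluster` calls above return one JSON document describing the cluster and its nodes. As a rough sketch only (the types below are inferred from the JSON printed in this log, not taken from minikube's source), it can be decoded like this:

package main

import (
	"encoding/json"
	"fmt"
)

// Field names inferred from the status JSON shown in the log above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// Second status payload from the log above.
	raw := `{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915015049-6768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, "-", st.StatusDetail) // InsufficientStorage - /var is almost out of disk space
	for _, n := range st.Nodes {
		fmt.Println("node", n.Name, "=>", n.StatusName)
	}
}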

                                                
                                    
TestRunningBinaryUpgrade (79.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.249549796.exe start -p running-upgrade-20210915015149-6768 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.249549796.exe start -p running-upgrade-20210915015149-6768 --memory=2200 --vm-driver=docker  --container-runtime=docker: (49.096470645s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210915015149-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 01:52:50.803576    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210915015149-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.779453266s)
helpers_test.go:176: Cleaning up "running-upgrade-20210915015149-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210915015149-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210915015149-6768: (2.558406676s)
--- PASS: TestRunningBinaryUpgrade (79.80s)

                                                
                                    
TestKubernetesUpgrade (103.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.619524979s)
version_upgrade_test.go:231: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210915015303-6768
version_upgrade_test.go:231: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210915015303-6768: (2.075760527s)
version_upgrade_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210915015303-6768 status --format={{.Host}}
version_upgrade_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210915015303-6768 status --format={{.Host}}: exit status 7 (104.13655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:238: status error: exit status 7 (may be ok)
version_upgrade_test.go:247: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:247: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.477207555s)
version_upgrade_test.go:252: (dbg) Run:  kubectl --context kubernetes-upgrade-20210915015303-6768 version --output=json
version_upgrade_test.go:271: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:273: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:273: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker: exit status 106 (103.122742ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210915015303-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
	  - MINIKUBE_LOCATION=12425
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.2-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210915015303-6768
	    minikube start -p kubernetes-upgrade-20210915015303-6768 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915015303-67682 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.2-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915015303-6768 --kubernetes-version=v1.22.2-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:277: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:279: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:279: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210915015303-6768 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.061312325s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210915015303-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210915015303-6768
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210915015303-6768: (3.188650054s)
--- PASS: TestKubernetesUpgrade (103.70s)
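
Note: after the upgrade, the test runs `kubectl version --output=json` (version_upgrade_test.go:252). A minimal sketch of reading the reported versions from that JSON, using an illustrative sample payload rather than the actual output captured in this run:

package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	GitVersion string `json:"gitVersion"`
}

type versionOutput struct {
	ClientVersion versionInfo `json:"clientVersion"`
	ServerVersion versionInfo `json:"serverVersion"`
}

func main() {
	// Sample payload for illustration only; not captured from this test run.
	sample := `{"clientVersion":{"gitVersion":"v1.22.1"},"serverVersion":{"gitVersion":"v1.22.2-rc.0"}}`

	var v versionOutput
	if err := json.Unmarshal([]byte(sample), &v); err != nil {
		panic(err)
	}
	// After the upgrade, the server is expected to report the upgraded version.
	fmt.Println("client:", v.ClientVersion.GitVersion, "server:", v.ServerVersion.GitVersion)
}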

                                                
                                    
TestMissingContainerUpgrade (108.25s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Run:  /tmp/minikube-v1.9.1.1907708610.exe start -p missing-upgrade-20210915015204-6768 --memory=2200 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Done: /tmp/minikube-v1.9.1.1907708610.exe start -p missing-upgrade-20210915015204-6768 --memory=2200 --driver=docker  --container-runtime=docker: (46.575637802s)
version_upgrade_test.go:322: (dbg) Run:  docker stop missing-upgrade-20210915015204-6768
version_upgrade_test.go:322: (dbg) Done: docker stop missing-upgrade-20210915015204-6768: (1.982783091s)
version_upgrade_test.go:327: (dbg) Run:  docker rm missing-upgrade-20210915015204-6768
version_upgrade_test.go:333: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210915015204-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:333: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210915015204-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.106012591s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210915015204-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210915015204-6768

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210915015204-6768: (3.15016536s)
--- PASS: TestMissingContainerUpgrade (108.25s)

                                                
                                    
TestPause/serial/Start (63.84s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210915015059-6768 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210915015059-6768 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m3.838574813s)
--- PASS: TestPause/serial/Start (63.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Run:  /tmp/minikube-v1.9.0.340552201.exe start -p stopped-upgrade-20210915015059-6768 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0915 01:51:27.759242    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Done: /tmp/minikube-v1.9.0.340552201.exe start -p stopped-upgrade-20210915015059-6768 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m14.04922218s)
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.340552201.exe -p stopped-upgrade-20210915015059-6768 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.340552201.exe -p stopped-upgrade-20210915015059-6768 stop: (12.672727735s)
version_upgrade_test.go:202: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210915015059-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:202: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20210915015059-6768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.615378716s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210915015059-6768 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210915015059-6768 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (8.911810651s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.94s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210915015059-6768 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210915015059-6768 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210915015059-6768 --output=json --layout=cluster: exit status 2 (407.356152ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210915015059-6768","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 13 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210915015059-6768","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
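
The exit status 2 above is expected while the cluster is paused: status still prints the cluster JSON and signals the paused state through the exit code (StatusCode 418, "Paused"). A small sketch of checking the same thing by hand; jq is only an assumed convenience for picking fields out of the JSON shown above:

	out/minikube-linux-amd64 status -p pause-20210915015059-6768 --output=json --layout=cluster
	echo "status exit: $?"    # 2 while paused
	out/minikube-linux-amd64 status -p pause-20210915015059-6768 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.apiserver.StatusName, .Nodes[0].Components.kubelet.StatusName'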

                                                
                                    
x
+
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210915015059-6768 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210915015059-6768 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.8s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210915015059-6768 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210915015059-6768 --alsologtostderr -v=5: (2.797267227s)
--- PASS: TestPause/serial/DeletePaused (2.80s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (16.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.240208631s)
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20210915015059-6768
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20210915015059-6768: exit status 1 (46.417576ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210915015059-6768

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (16.34s)
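
The deletion check above boils down to: the profile is gone from profile list, and Docker no longer has a container or volume named after it (volume inspect failing with "No such volume" is the desired outcome). The same checks by hand:

	out/minikube-linux-amd64 profile list --output json    # deleted profile should no longer appear
	docker ps -a                                           # no container left for the profile
	docker volume inspect pause-20210915015059-6768        # expected to fail once cleanup worked
	echo "volume inspect exit: $?"                         # 1, matching the log above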

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210915015059-6768

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210915015059-6768: (1.531255734s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (110.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210915015344-6768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210915015344-6768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (1m50.262656194s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (110.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0: (1m11.804659553s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.80s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (54.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1: (54.323024838s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915015352-6768 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [80a0ee3c-6f6d-487a-9c07-bec438676715] Pending

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [80a0ee3c-6f6d-487a-9c07-bec438676715] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [80a0ee3c-6f6d-487a-9c07-bec438676715] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011049452s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915015352-6768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)
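
DeployApp is a plain kubectl sequence: create the busybox pod from testdata, wait for it to become Ready, then exec into it. Roughly the same thing outside the harness, with kubectl wait standing in for the test's polling helper (an assumption about how you would reproduce it, not what the test itself runs):

	kubectl --context embed-certs-20210915015352-6768 create -f testdata/busybox.yaml
	kubectl --context embed-certs-20210915015352-6768 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context embed-certs-20210915015352-6768 exec busybox -- /bin/sh -c "ulimit -n"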

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (36.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210915015447-6768 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210915015447-6768 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0: (36.873362949s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210915015352-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20210915015352-6768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210915015352-6768 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210915015352-6768 --alsologtostderr -v=3: (11.247809194s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915015352-6768 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [7d3ed40d-bf75-415c-bc56-e3d053e2a997] Pending
helpers_test.go:343: "busybox" [7d3ed40d-bf75-415c-bc56-e3d053e2a997] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [7d3ed40d-bf75-415c-bc56-e3d053e2a997] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.010491703s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915015352-6768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768: exit status 7 (100.094147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210915015352-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
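
Exit status 7 from status means the profile's host is stopped, which the test tolerates; the point of the step is that addons can still be enabled while the cluster is down. The equivalent by hand:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768 \
	  || echo "status exit: $? (7 = stopped, tolerated)"
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210915015352-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4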

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (341.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1: (5m40.569948638s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.01s)
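
SecondStart simply reruns the original start command against the stopped profile; no flags change between the first and second start, and the step then verifies via status that the host is up again. A sketch of the same sequence:

	out/minikube-linux-amd64 start -p embed-certs-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --embed-certs \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.22.1
	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-20210915015352-6768    # should report Running after a successful restart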

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210915015352-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20210915015352-6768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210915015352-6768 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210915015352-6768 --alsologtostderr -v=3: (12.00899898s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210915015447-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210915015447-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.032115726s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (5.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768: exit status 7 (437.815756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210915015352-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210915015352-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.047251267s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (368.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210915015352-6768 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0: (6m7.744891364s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (368.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210915015447-6768 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210915015447-6768 --alsologtostderr -v=3: (11.13704186s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915015344-6768 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [04100f08-15c8-11ec-98ca-0242358cf028] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0915 01:55:37.133016    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.138274    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.148491    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.168719    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.208945    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.289294    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:37.449585    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
helpers_test.go:343: "busybox" [04100f08-15c8-11ec-98ca-0242358cf028] Running
E0915 01:55:37.769874    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:38.410803    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:39.691861    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011171772s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915015344-6768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768: exit status 7 (111.889829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210915015447-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (21.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210915015447-6768 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0
E0915 01:55:42.252400    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210915015447-6768 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.2-rc.0: (20.968050083s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210915015344-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20210915015344-6768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210915015344-6768 --alsologtostderr -v=3
E0915 01:55:47.373360    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:55:53.817368    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210915015344-6768 --alsologtostderr -v=3: (11.083811717s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768: exit status 7 (102.756605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210915015344-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (352.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210915015344-6768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0
E0915 01:55:57.613771    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210915015344-6768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (5m51.913393945s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (352.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210915015447-6768 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)
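
VerifyKubernetesImages lists the images present in the node's container runtime over SSH (elsewhere in this run the same step flags any non-minikube images it finds). To eyeball the same data yourself; the jq filter is an assumed convenience and the field names follow crictl's JSON output:

	out/minikube-linux-amd64 ssh -p newest-cni-20210915015447-6768 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'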

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210915015447-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768: exit status 2 (369.227127ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768: exit status 2 (373.179616ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210915015447-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210915015447-6768 -n newest-cni-20210915015447-6768
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/FirstStart (43.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210915015609-6768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1
E0915 01:56:18.094606    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 01:56:27.758770    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210915015609-6768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1: (43.654050896s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (43.65s)
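
This group starts the apiserver on a non-default port (8444) via --apiserver-port. One quick, informal way to confirm the port took effect, not something the test itself runs, is to look at the server URL recorded in the kubeconfig (assuming, as minikube normally does, that the kubeconfig cluster is named after the profile):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-different-port-20210915015609-6768")].cluster.server}'
	# expected to end in :8444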

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915015609-6768 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e3b246a2-20f2-49f2-b864-a2cad79aa5a4] Pending
helpers_test.go:343: "busybox" [e3b246a2-20f2-49f2-b864-a2cad79aa5a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [e3b246a2-20f2-49f2-b864-a2cad79aa5a4] Running
E0915 01:56:59.055769    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.012149599s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915015609-6768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210915015609-6768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20210915015609-6768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Stop (11.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210915015609-6768 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210915015609-6768 --alsologtostderr -v=3: (11.201452258s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (11.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768: exit status 7 (92.993475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210915015609-6768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (346.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210915015609-6768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1
E0915 01:57:16.865380    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
E0915 01:58:20.976839    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 02:00:37.133717    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210915015609-6768 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.1: (5m46.01209006s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (346.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-zqc2g" [d0f6992a-9da7-4a1f-8307-73788b55e8c4] Running
E0915 02:00:53.818297    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/functional-20210915013418-6768/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011633928s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-zqc2g" [d0f6992a-9da7-4a1f-8307-73788b55e8c4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006835428s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210915015352-6768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210915015352-6768 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210915015352-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768: exit status 2 (386.84524ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768: exit status 2 (378.164488ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210915015352-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210915015352-6768 -n embed-certs-20210915015352-6768
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
E0915 02:01:27.759459    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (43.803307211s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-qdzxk" [c16093ef-6049-404d-99f3-2a9abe9564c6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-qdzxk" [c16093ef-6049-404d-99f3-2a9abe9564c6] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.296182724s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-qdzxk" [c16093ef-6049-404d-99f3-2a9abe9564c6] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006877418s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20210915015352-6768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-9qpq7" [3cce2ae3-15c8-11ec-bcab-02428bf0740a] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012908062s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210915015352-6768 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
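VerifyKubernetesImages lists node images with sudo crictl images -o json and flags anything that is not a stock minikube image (here busybox:1.28.4-glibc). Below is a parsing sketch; the images/repoTags JSON field names are an assumption based on typical crictl output, not something shown in this report.

// images.go - list repo tags from `crictl images -o json` output (sketch).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names are assumed from common crictl output, not taken from this report.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	profile := "no-preload-20210915015352-6768" // profile name from the log above
	out, err := exec.Command("minikube", "ssh", "-p", profile, "sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}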

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-b5xrx" [3cd094c7-42ed-4b64-83f1-003c121a9616] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-b5xrx" [3cd094c7-42ed-4b64-83f1-003c121a9616] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005010412s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210915015352-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768: exit status 2 (389.585069ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768: exit status 2 (422.26989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210915015352-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210915015352-6768 -n no-preload-20210915015352-6768
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)
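The Pause subtest pauses the profile, reads component status, then unpauses. The log shows minikube status exiting with status 2 while the components report Paused/Stopped, and the test treats that as acceptable ("status error: exit status 2 (may be ok)"). A minimal reproduction sketch of the same cycle, assuming minikube is on PATH and using the profile name from the log:

// pausecycle.go - pause a profile, inspect status, then unpause (sketch).
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		// status may exit non-zero for Paused/Stopped components; report and continue.
		fmt.Printf("non-zero exit for %v: %v\n", args, err)
	}
	return string(out)
}

func main() {
	profile := "no-preload-20210915015352-6768" // profile name from the log above
	run("pause", "-p", profile)
	fmt.Print("apiserver: ", run("status", "-p", profile, "--format={{.APIServer}}"))
	fmt.Print("kubelet:   ", run("status", "-p", profile, "--format={{.Kubelet}}"))
	run("unpause", "-p", profile)
}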

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-9qpq7" [3cce2ae3-15c8-11ec-bcab-02428bf0740a] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006052009s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210915015344-6768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210915015344-6768 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210915015344-6768 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768: exit status 2 (407.208089ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768: exit status 2 (389.211284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210915015344-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210915015344-6768 -n old-k8s-version-20210915015344-6768
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                    
TestNetworkPlugins/group/false/Start (87.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (1m27.088288348s)
--- PASS: TestNetworkPlugins/group/false/Start (87.09s)
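Each Start subtest brings up a fresh profile with the flags shown above and records the wall-clock time. A hedged sketch of the same invocation for this --cni=false case, timing the start; it assumes minikube is on PATH and mirrors the logged flags rather than the test code itself.

// startcni.go - start a profile with an explicit --cni setting and time it (sketch).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"start", "-p", "false-20210915015303-6768",
		"--memory=2048", "--wait=true", "--wait-timeout=5m",
		"--cni=false", "--driver=docker", "--container-runtime=docker",
	}
	begin := time.Now()
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		fmt.Println("start failed:", err, string(out))
		return
	}
	fmt.Printf("started in %s\n", time.Since(begin).Round(time.Millisecond))
}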

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)
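The DNS subtest resolves kubernetes.default from inside the netcat deployment. A small sketch of the same check, assuming kubectl is on PATH and using the context name from the log:

// dnscheck.go - resolve kubernetes.default from inside the netcat deployment (sketch).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-20210915015303-6768" // context name from the log above
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		fmt.Println("in-cluster DNS lookup failed:", err)
	}
	fmt.Print(string(out))
}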

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.187786483s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.19s)
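HairPin has the netcat pod dial its own service name. With this plugin configuration the connection attempt exits non-zero and the test still passes, so the failed connection appears to be the tolerated outcome here rather than an error. A sketch of the probe, assuming kubectl on PATH and the context name from the log:

// hairpin.go - check whether a pod can reach itself through its own service (sketch).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-20210915015303-6768" // context name from the log above
	cmd := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if err := cmd.Run(); err != nil {
		// Per the log above, this path can be hit while the test still passes.
		fmt.Println("hairpin connection refused or timed out:", err)
		return
	}
	fmt.Println("hairpin connection succeeded")
}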

                                                
                                    
TestNetworkPlugins/group/cilium/Start (86.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m26.706586291s)
--- PASS: TestNetworkPlugins/group/cilium/Start (86.71s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: (1m8.824402927s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.82s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n8n9r" [422afad4-2208-4edd-aba2-12f07986b347] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01385207s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n8n9r" [422afad4-2208-4edd-aba2-12f07986b347] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00809678s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210915015609-6768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210915015609-6768 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210915015609-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768: exit status 2 (459.121751ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768: exit status 2 (475.275474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210915015609-6768 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210915015609-6768 -n default-k8s-different-port-20210915015609-6768
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.48s)
E0915 02:05:09.065417    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/no-preload-20210915015352-6768/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (54.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (54.987055607s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (54.99s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-z7mp2" [81f9d164-e3fd-47e5-91be-98a2e83fc2b7] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014415754s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qftp8" [8e2a0496-3df9-48a0-b43c-e4f923509f3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-qftp8" [8e2a0496-3df9-48a0-b43c-e4f923509f3c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.111812533s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-vlmtr" [0543cc8c-5034-4627-9279-1c636f6df066] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-vlmtr" [0543cc8c-5034-4627-9279-1c636f6df066] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 8.005785137s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (8.28s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-64gvd" [5537c8de-84ba-46e1-a394-5f514296e095] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.897373456s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.90s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.161587499s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (1.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Done: kubectl --context calico-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (1.833100718s)
--- PASS: TestNetworkPlugins/group/calico/Localhost (1.83s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-rssdh" [ee9fb42c-ec22-400a-bc4e-c476b29e7a87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-rssdh" [ee9fb42c-ec22-400a-bc4e-c476b29e7a87] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.006566948s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (49.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (49.5513837s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (1m3.130817284s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.13s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-xl2xd" [59847628-22b6-4cb7-8d6c-8296b63c3ea7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-xl2xd" [59847628-22b6-4cb7-8d6c-8296b63c3ea7] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.040711023s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (43.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (43.48084912s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-x92n6" [9d854626-f81e-4ea1-a7a9-2985bd20521e] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-x92n6" [9d854626-f81e-4ea1-a7a9-2985bd20521e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-x92n6" [9d854626-f81e-4ea1-a7a9-2985bd20521e] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006100472s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (46.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20210915015303-6768 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (46.938289702s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-rzj5b" [bc9e5603-d86b-46c4-87f6-4fa572f40726] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013921575s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-ncd9d" [78dbe7d1-0897-479c-a059-9af8c43e7679] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-ncd9d" [78dbe7d1-0897-479c-a059-9af8c43e7679] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.006517975s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-cx7tz" [f83d025e-68a8-458f-a374-b93b3d09d306] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 02:05:11.625605    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/no-preload-20210915015352-6768/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-cx7tz" [f83d025e-68a8-458f-a374-b93b3d09d306] Running
E0915 02:05:16.746665    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/no-preload-20210915015352-6768/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010470275s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20210915015303-6768 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20210915015303-6768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-hncpn" [b1b5b90f-0a98-4408-9afd-cf6d9c29c465] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 02:05:34.756808    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:34.762044    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:34.772279    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:34.792535    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:34.832770    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:34.913129    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:35.073413    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:35.393972    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:36.034958    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-hncpn" [b1b5b90f-0a98-4408-9afd-cf6d9c29c465] Running
E0915 02:05:37.133355    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/skaffold-20210915014943-6768/client.crt: no such file or directory
E0915 02:05:37.315713    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
E0915 02:05:39.876308    6768 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/old-k8s-version-20210915015344-6768/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006355246s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20210915015303-6768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kubenet-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kubenet-20210915015303-6768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    

Test skip (20/282)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.2-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.2-rc.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.2-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.2-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/kubectl (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:647: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:44: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.54s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210915015447-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210915015447-6768
--- SKIP: TestStartStop/group/disable-driver-mounts (0.54s)

TestNetworkPlugins/group/flannel (0.29s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210915015303-6768" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210915015303-6768
--- SKIP: TestNetworkPlugins/group/flannel (0.29s)
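
This flannel skip is gated on the iptables incompatibility quoted above rather than on platform. A hypothetical diagnostic (not executed in this run; with the Docker driver each node is a container named after its profile, and this profile was already deleted during cleanup) would be to check which iptables build the node reports:

    # Hypothetical diagnostic against a live Docker-driver node container.
    docker exec flannel-20210915015303-6768 iptables --version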
