=== RUN TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT TestAddons/parallel/CSI
addons_test.go:484: csi-hostpath-driver pods stabilized in 5.694251ms
addons_test.go:487: (dbg) Run: kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:492: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20210915012342-6768 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT TestAddons/parallel/CSI
helpers_test.go:393: (dbg) Run: kubectl --context addons-20210915012342-6768 get pvc hpvc -o jsonpath={.status.phase} -n default
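    (testdata/csi-hostpath-driver/pvc.yaml itself is not reproduced in this log; a minimal sketch of what a claim like "hpvc" typically looks like, assuming the csi-hostpath driver's usual StorageClass name csi-hostpath-sc and an arbitrary 1Gi request:)

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      storageClassName: csi-hostpath-sc   # assumed; provisioned by the csi-hostpath-driver addon
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                    # arbitrary size for illustration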
addons_test.go:497: (dbg) Run: kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:502: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Pending
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [18bf7079-ae54-4543-b0e1-f228daae1947] Running
=== CONT TestAddons/parallel/CSI
addons_test.go:502: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.08344284s
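    (pv-pod.yaml is likewise not shown in the log; a sketch reconstructed from the container details kubectl describe reports later — container task-pv-container, image nginx, port 80, mount at /usr/share/nginx/html — bound to the hpvc claim:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
      labels:
        app: task-pv-pod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: hpvc
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: task-pv-storage
              mountPath: /usr/share/nginx/html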
addons_test.go:507: (dbg) Run: kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:512: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run: kubectl --context addons-20210915012342-6768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run: kubectl --context addons-20210915012342-6768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
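    (snapshot.yaml is not shown either; a sketch of a VolumeSnapshot named new-snapshot-demo taken from the hpvc claim, assuming a snapshot class name of csi-hostpath-snapclass — on older external-snapshotter releases the apiVersion may be snapshot.storage.k8s.io/v1beta1 instead:)

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc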
addons_test.go:517: (dbg) Run: kubectl --context addons-20210915012342-6768 delete pod task-pv-pod
addons_test.go:517: (dbg) Done: kubectl --context addons-20210915012342-6768 delete pod task-pv-pod: (1.217428995s)
addons_test.go:523: (dbg) Run: kubectl --context addons-20210915012342-6768 delete pvc hpvc
addons_test.go:529: (dbg) Run: kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run: kubectl --context addons-20210915012342-6768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
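    (pvc-restore.yaml presumably restores the snapshot into a new claim via a dataSource reference; a sketch, with the same StorageClass and size assumptions as above:)

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                    # arbitrary size for illustration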
addons_test.go:539: (dbg) Run: kubectl --context addons-20210915012342-6768 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [016da038-7e39-46c3-9e82-2ac44a0118dd] Pending
helpers_test.go:343: "task-pv-pod-restore" [016da038-7e39-46c3-9e82-2ac44a0118dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT TestAddons/parallel/CSI
addons_test.go:544: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:544: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
addons_test.go:544: TestAddons/parallel/CSI: showing logs for failed pods as of 2021-09-15 01:33:18.373959752 +0000 UTC m=+602.537135378
addons_test.go:544: (dbg) Run: kubectl --context addons-20210915012342-6768 describe po task-pv-pod-restore -n default
addons_test.go:544: (dbg) kubectl --context addons-20210915012342-6768 describe po task-pv-pod-restore -n default:
Name:         task-pv-pod-restore
Namespace:    default
Priority:     0
Node:         addons-20210915012342-6768/192.168.49.2
Start Time:   Wed, 15 Sep 2021 01:27:17 +0000
Labels:       app=task-pv-pod-restore
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  task-pv-container:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      k8s-minikube
      GCP_PROJECT:                     k8s-minikube
      GCLOUD_PROJECT:                  k8s-minikube
      GOOGLE_CLOUD_PROJECT:            k8s-minikube
      CLOUDSDK_CORE_PROJECT:           k8s-minikube
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rw7gr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-rw7gr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               6m1s                  default-scheduler        Successfully assigned default/task-pv-pod-restore to addons-20210915012342-6768
  Normal   SuccessfulAttachVolume  6m                    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03"
  Warning  FailedMount             111s (x10 over 6m)    kubelet                  MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
  Warning  FailedMount             102s (x2 over 3m58s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition
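    (The FailedMount events point at the gcp-creds hostPath volume rather than the CSI volume: the gcp-auth addon injects a credentials mount with HostPathType File, and the Audit table below shows gcp-auth being disabled at 01:27:03, shortly before this pod started, so the backing file on the node is plausibly gone by the time the kubelet runs its hostPath type check. The shape of the injected volume, reconstructed from the describe output above — the actual injected manifest is not in the log:)

    # volumes section of the pod spec as injected by the gcp-auth webhook (reconstruction)
    volumes:
      - name: gcp-creds
        hostPath:
          path: /var/lib/minikube/google_application_credentials.json
          type: File   # mount fails unless this path exists on the node as a regular file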
addons_test.go:544: (dbg) Run: kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default
addons_test.go:544: (dbg) Non-zero exit: kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default: exit status 1 (72.70227ms)
** stderr **
Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: ContainerCreating
** /stderr **
addons_test.go:544: kubectl --context addons-20210915012342-6768 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:545: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect addons-20210915012342-6768
helpers_test.go:236: (dbg) docker inspect addons-20210915012342-6768:
-- stdout --
[
{
"Id": "f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247",
"Created": "2021-09-15T01:24:09.767104389Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 9017,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-09-15T01:24:10.333554125Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
"ResolvConfPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/hostname",
"HostsPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/hosts",
"LogPath": "/var/lib/docker/containers/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247/f6a7f69382399d9ee4f92521cd5a2f240e72d1b8fae430888cb867e4585ea247-json.log",
"Name": "/addons-20210915012342-6768",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"addons-20210915012342-6768:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-20210915012342-6768",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32-init/diff:/var/lib/docker/overlay2/c09202ba721929c0f83cfbe05a9a7edd19aceee2a2070d87f35e3eeb64726707/diff:/var/lib/docker/overlay2/e1fdd3ffc1180deb8a0e09cdea2ee8630bda0b622e281199b9b064a7a00561b8/diff:/var/lib/docker/overlay2/7fda6a5848e725fa138c142a377bb78189270dcb404c5d8d80760b6c0e4db5e5/diff:/var/lib/docker/overlay2/b125a57bbad6df390220e5b99a6efb9072c788968bce332e3182af0f92c47abf/diff:/var/lib/docker/overlay2/921ad870272b4b957705c1f8bc4d7ddb53f0315a22385aca7fe0226b36a0ca3c/diff:/var/lib/docker/overlay2/9227d4f119c8b653bf356ed31719e424fb15e6a421ac6d4f0d0c308d989d7fef/diff:/var/lib/docker/overlay2/d9fc973dfee4c8a9e042e1f0eb12ea209ea2274c6dde45e1b94cd507963c5bd5/diff:/var/lib/docker/overlay2/b61e09bd505af8ac96dc90e954ad045e8a9db6fecbd2b842d773d2732d6d9014/diff:/var/lib/docker/overlay2/ec01f649f59978eaae4f1d684cc0e41735bbb7d96a113159392dc8ca2af6f426/diff:/var/lib/docker/overlay2/9da7a2
4e6630bb7d8a35666f96dedeb46e677ac2ad87cd899812701e6a005cf3/diff:/var/lib/docker/overlay2/989e766078c3e8e94edc7f73233905d9ffa58606437ee657327cb81f7f0f84db/diff:/var/lib/docker/overlay2/138969e8d939ffe3390978e423bccef489b8e8043057f844f31ee8b576a448d6/diff:/var/lib/docker/overlay2/32070a9e35fe4dc3a93e90df77d68afde9d83d5e80e09b2d4bec6e9d69b3d916/diff:/var/lib/docker/overlay2/e98ae99d45a0b41ee2f27400daa7bbc81fb9d5b4997074d44839aa4b12f7bfa6/diff:/var/lib/docker/overlay2/8762d166d07ef547b66a7d8435c811a6a5c29371f0d3329eb7225355478d15e1/diff:/var/lib/docker/overlay2/06bb8873c66cd9c23f1e5dddfff72086bd7fb96a709c7828ca394021d7aa9f16/diff:/var/lib/docker/overlay2/bb88812041d10b5820592db379c1d5e010fd5f45726435935cf954c476a1b415/diff:/var/lib/docker/overlay2/1ed176dd388f5b30436eb399c22cd1ba158ceaf858bdb7287b2fbfc8d2e5bf14/diff:/var/lib/docker/overlay2/841c9fd7a64d2fabdc958fc73fe929f282447acf0c1b7236a82e465e71322cdb/diff:/var/lib/docker/overlay2/67e8ae2ce9ede87c152d76e197e8f97a780a6d877e9bae47bcbe9397f27bb009/diff:/var/lib/d
ocker/overlay2/38741c59600445f92d98b126f954d22cc91f0f17a9ee8f520ef7043ad6ae65b2/diff:/var/lib/docker/overlay2/11aa586cc62584d1ad51d305e8e0ab4ac6e0d4c59a6dcb9ef75d3383010b3123/diff:/var/lib/docker/overlay2/5d8f6d21e77b74bddfce3305f95a4b3f675f95d4f83ea6fd4c62d5990431d396/diff:/var/lib/docker/overlay2/89ecf90e7e64abad9349517382dcbf066d4e8405c1a506a4b891b486153023d4/diff:/var/lib/docker/overlay2/03343b56866387dcc649efb11cc50e123f141cc94713e6bd0c2c9bbc3434d33e/diff:/var/lib/docker/overlay2/3f91e9d35fbcde7722183a441bf8c99781b7b5a513faa4c1bb8558a4032d16f4/diff:/var/lib/docker/overlay2/840c99850a911f467995dad0b78247f9fad9f7129aefdfba282cec2ac545ae36/diff:/var/lib/docker/overlay2/bce9487b05b417af0ed326e59728f044c0cb9197f27450f37c06ce2d86299f82/diff:/var/lib/docker/overlay2/a03daf7ac351e27eeb3415580fa8e6712145052964da904a40687062073b9cb7/diff:/var/lib/docker/overlay2/d8c4f7ef1395988a5900bc0f4888bafe68cf81bb8b66253d25ef2d23f4c14faf/diff:/var/lib/docker/overlay2/97c6dad15bffcf7946e0f7affbbcbecd6d71eedfbec326858897c30494c
eced3/diff:/var/lib/docker/overlay2/f814b86959f1a92bff34a7366c025dc4e4059eafb72a6d03d7ae44f2372942b1/diff:/var/lib/docker/overlay2/5fcfeb62c4286d549aa184405e89f5a2a73c30bdd969089642958abcd3d1878b/diff:/var/lib/docker/overlay2/f470952a996d35d6c6e34072852573a7656669d3791436814686a3fc712d9315/diff:/var/lib/docker/overlay2/5f594f121798b617f8190d75f72a91913ed67054c5262fce467bc025910fe6c1/diff:/var/lib/docker/overlay2/fd74cf7beb49d3fece3b06f4752e25c3581ba5069e9852d31ae298b24a6bbe1c/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff",
"MergedDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/merged",
"UpperDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/diff",
"WorkDir": "/var/lib/docker/overlay2/51d9b234f14d1df94073fe2d04270bd2b618bcd7ba2653ce81c199e50032ed32/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-20210915012342-6768",
"Source": "/var/lib/docker/volumes/addons-20210915012342-6768/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-20210915012342-6768",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-20210915012342-6768",
"name.minikube.sigs.k8s.io": "addons-20210915012342-6768",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "3cf8a1ef7e2ad7d5fb48ed1fd15191f2c3c9ba6b683146e26edd7d91e240043e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
]
},
"SandboxKey": "/var/run/docker/netns/3cf8a1ef7e2a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-20210915012342-6768": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"f6a7f6938239"
],
"NetworkID": "7af3f8389e1586aaac65a3567d1879209c123d113523e5fa9d723966e614a202",
"EndpointID": "c36314264a398a5fcc69901176c13072d6aea588b6d352f6a4f700dde4d74e16",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
helpers_test.go:245: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p addons-20210915012342-6768 logs -n 25
helpers_test.go:253: TestAddons/parallel/CSI logs:
-- stdout --
*
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | --all | download-only-20210915012315-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:38 UTC |
| delete | -p | download-only-20210915012315-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:38 UTC |
| | download-only-20210915012315-6768 | | | | | |
| delete | -p | download-only-20210915012315-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:38 UTC | Wed, 15 Sep 2021 01:23:39 UTC |
| | download-only-20210915012315-6768 | | | | | |
| delete | -p | download-docker-20210915012339-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:42 UTC | Wed, 15 Sep 2021 01:23:42 UTC |
| | download-docker-20210915012339-6768 | | | | | |
| start | -p addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:23:43 UTC | Wed, 15 Sep 2021 01:26:03 UTC |
| | --wait=true --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=olm | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=helm-tiller | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:03 UTC | Wed, 15 Sep 2021 01:26:17 UTC |
| | addons enable gcp-auth | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:17 UTC | Wed, 15 Sep 2021 01:26:27 UTC |
| | addons enable gcp-auth --force | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:32 UTC | Wed, 15 Sep 2021 01:26:33 UTC |
| | addons disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210915012342-6768 ip | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:54 UTC | Wed, 15 Sep 2021 01:26:54 UTC |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:54 UTC | Wed, 15 Sep 2021 01:26:55 UTC |
| | addons disable registry | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:26:56 UTC | Wed, 15 Sep 2021 01:26:57 UTC |
| | addons disable helm-tiller | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:03 UTC | Wed, 15 Sep 2021 01:27:03 UTC |
| | addons disable gcp-auth | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| -p | addons-20210915012342-6768 ssh | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:05 UTC | Wed, 15 Sep 2021 01:27:06 UTC |
| | curl -s http://127.0.0.1/ -H | | | | | |
| | 'Host: nginx.example.com' | | | | | |
| -p | addons-20210915012342-6768 | addons-20210915012342-6768 | jenkins | v1.23.0 | Wed, 15 Sep 2021 01:27:06 UTC | Wed, 15 Sep 2021 01:27:34 UTC |
| | addons disable ingress | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 01:23:43
Running on machine: debian-jenkins-agent-11
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 01:23:43.042897 7717 out.go:298] Setting OutFile to fd 1 ...
I0915 01:23:43.042969 7717 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 01:23:43.042974 7717 out.go:311] Setting ErrFile to fd 2...
I0915 01:23:43.042980 7717 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 01:23:43.043100 7717 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/bin
I0915 01:23:43.043369 7717 out.go:305] Setting JSON to false
I0915 01:23:43.076298 7717 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":386,"bootTime":1631668637,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 01:23:43.076406 7717 start.go:121] virtualization: kvm guest
I0915 01:23:43.078554 7717 out.go:177] * [addons-20210915012342-6768] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 01:23:43.078694 7717 notify.go:169] Checking for updates...
I0915 01:23:43.080022 7717 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
I0915 01:23:43.081416 7717 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 01:23:43.082672 7717 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube
I0915 01:23:43.083817 7717 out.go:177] - MINIKUBE_LOCATION=12425
I0915 01:23:43.083997 7717 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 01:23:43.127173 7717 docker.go:132] docker version: linux-19.03.15
I0915 01:23:43.127258 7717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 01:23:43.202779 7717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:43.15849678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 01:23:43.202893 7717 docker.go:237] overlay module found
I0915 01:23:43.204689 7717 out.go:177] * Using the docker driver based on user configuration
I0915 01:23:43.204708 7717 start.go:278] selected driver: docker
I0915 01:23:43.204714 7717 start.go:751] validating driver "docker" against <nil>
I0915 01:23:43.204733 7717 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0915 01:23:43.204774 7717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0915 01:23:43.204794 7717 out.go:242] ! Your cgroup does not allow setting memory.
I0915 01:23:43.206184 7717 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0915 01:23:43.206923 7717 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 01:23:43.276661 7717 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:182 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-09-15 01:23:43.238196574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 01:23:43.276741 7717 start_flags.go:264] no existing cluster config was found, will generate one from the flags
I0915 01:23:43.276873 7717 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0915 01:23:43.276893 7717 cni.go:93] Creating CNI manager for ""
I0915 01:23:43.276899 7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 01:23:43.276905 7717 start_flags.go:278] config:
{Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 01:23:43.278754 7717 out.go:177] * Starting control plane node addons-20210915012342-6768 in cluster addons-20210915012342-6768
I0915 01:23:43.278774 7717 cache.go:118] Beginning downloading kic base image for docker with docker
I0915 01:23:43.280394 7717 out.go:177] * Pulling base image ...
I0915 01:23:43.280429 7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0915 01:23:43.280460 7717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
I0915 01:23:43.280472 7717 cache.go:57] Caching tarball of preloaded images
I0915 01:23:43.280531 7717 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
I0915 01:23:43.280616 7717 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0915 01:23:43.280636 7717 cache.go:60] Finished verifying existence of preloaded tar for v1.22.1 on docker
I0915 01:23:43.280911 7717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json ...
I0915 01:23:43.280943 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json: {Name:mk3c1835448dfec8bc8961af8314578e08ae9a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:23:43.361279 7717 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
I0915 01:23:43.361425 7717 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
I0915 01:23:43.361442 7717 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
I0915 01:23:43.361447 7717 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
I0915 01:23:43.361462 7717 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
I0915 01:23:43.361471 7717 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 from local cache
I0915 01:24:06.834928 7717 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 from cached tarball
I0915 01:24:06.834965 7717 cache.go:206] Successfully downloaded all kic artifacts
I0915 01:24:06.835002 7717 start.go:313] acquiring machines lock for addons-20210915012342-6768: {Name:mkc7ac9c365edb65286f5fa8828239238f7b72b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 01:24:06.835113 7717 start.go:317] acquired machines lock for "addons-20210915012342-6768" in 87.695µs
I0915 01:24:06.835136 7717 start.go:89] Provisioning new machine with config: &{Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0915 01:24:06.835200 7717 start.go:126] createHost starting for "" (driver="docker")
I0915 01:24:06.837459 7717 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0915 01:24:06.837672 7717 start.go:160] libmachine.API.Create for "addons-20210915012342-6768" (driver="docker")
I0915 01:24:06.837701 7717 client.go:168] LocalClient.Create starting
I0915 01:24:06.837814 7717 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem
I0915 01:24:07.123885 7717 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem
I0915 01:24:07.214001 7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0915 01:24:07.249160 7717 cli_runner.go:162] docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0915 01:24:07.249250 7717 network_create.go:255] running [docker network inspect addons-20210915012342-6768] to gather additional debugging logs...
I0915 01:24:07.249273 7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768
W0915 01:24:07.282180 7717 cli_runner.go:162] docker network inspect addons-20210915012342-6768 returned with exit code 1
I0915 01:24:07.282206 7717 network_create.go:258] error running [docker network inspect addons-20210915012342-6768]: docker network inspect addons-20210915012342-6768: exit status 1
stdout:
[]
stderr:
Error: No such network: addons-20210915012342-6768
I0915 01:24:07.282218 7717 network_create.go:260] output of [docker network inspect addons-20210915012342-6768]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: addons-20210915012342-6768
** /stderr **
I0915 01:24:07.282260 7717 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 01:24:07.315859 7717 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007183c0] misses:0}
I0915 01:24:07.315897 7717 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0915 01:24:07.315912 7717 network_create.go:106] attempt to create docker network addons-20210915012342-6768 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0915 01:24:07.315949 7717 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210915012342-6768
I0915 01:24:07.387632 7717 network_create.go:90] docker network addons-20210915012342-6768 192.168.49.0/24 created
I0915 01:24:07.387667 7717 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210915012342-6768" container
I0915 01:24:07.387724 7717 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0915 01:24:07.421887 7717 cli_runner.go:115] Run: docker volume create addons-20210915012342-6768 --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --label created_by.minikube.sigs.k8s.io=true
I0915 01:24:07.456567 7717 oci.go:102] Successfully created a docker volume addons-20210915012342-6768
I0915 01:24:07.456638 7717 cli_runner.go:115] Run: docker run --rm --name addons-20210915012342-6768-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --entrypoint /usr/bin/test -v addons-20210915012342-6768:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
I0915 01:24:09.647715 7717 cli_runner.go:168] Completed: docker run --rm --name addons-20210915012342-6768-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --entrypoint /usr/bin/test -v addons-20210915012342-6768:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib: (2.191021089s)
I0915 01:24:09.647763 7717 oci.go:106] Successfully prepared a docker volume addons-20210915012342-6768
W0915 01:24:09.647796 7717 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0915 01:24:09.647808 7717 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0915 01:24:09.647812 7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0915 01:24:09.647841 7717 kic.go:179] Starting extracting preloaded images to volume ...
I0915 01:24:09.647857 7717 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0915 01:24:09.647899 7717 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210915012342-6768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir
I0915 01:24:09.729679 7717 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210915012342-6768 --name addons-20210915012342-6768 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210915012342-6768 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210915012342-6768 --network addons-20210915012342-6768 --ip 192.168.49.2 --volume addons-20210915012342-6768:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
I0915 01:24:10.342564 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Running}}
I0915 01:24:10.381353 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:10.423362 7717 cli_runner.go:115] Run: docker exec addons-20210915012342-6768 stat /var/lib/dpkg/alternatives/iptables
I0915 01:24:10.549001 7717 oci.go:281] the created container "addons-20210915012342-6768" has a running status.
I0915 01:24:10.549056 7717 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa...
I0915 01:24:10.733442 7717 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0915 01:24:11.130073 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:11.169086 7717 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0915 01:24:11.169107 7717 kic_runner.go:115] Args: [docker exec --privileged addons-20210915012342-6768 chown docker:docker /home/docker/.ssh/authorized_keys]
I0915 01:24:13.139010 7717 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210915012342-6768:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir: (3.491067455s)
I0915 01:24:13.139044 7717 kic.go:188] duration metric: took 3.491201 seconds to extract preloaded images to volume
I0915 01:24:13.139124 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:13.174330 7717 machine.go:88] provisioning docker machine ...
I0915 01:24:13.174364 7717 ubuntu.go:169] provisioning hostname "addons-20210915012342-6768"
I0915 01:24:13.174419 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:13.207894 7717 main.go:130] libmachine: Using SSH client type: native
I0915 01:24:13.208085 7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0915 01:24:13.208100 7717 main.go:130] libmachine: About to run SSH command:
sudo hostname addons-20210915012342-6768 && echo "addons-20210915012342-6768" | sudo tee /etc/hostname
I0915 01:24:13.395488 7717 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210915012342-6768
I0915 01:24:13.395557 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:13.431035 7717 main.go:130] libmachine: Using SSH client type: native
I0915 01:24:13.431185 7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0915 01:24:13.431207 7717 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-20210915012342-6768' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210915012342-6768/g' /etc/hosts;
else
echo '127.0.1.1 addons-20210915012342-6768' | sudo tee -a /etc/hosts;
fi
fi
I0915 01:24:13.535165 7717 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0915 01:24:13.535196 7717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube}
I0915 01:24:13.535220 7717 ubuntu.go:177] setting up certificates
I0915 01:24:13.535232 7717 provision.go:83] configureAuth start
I0915 01:24:13.535284 7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
I0915 01:24:13.570457 7717 provision.go:138] copyHostCerts
I0915 01:24:13.570528 7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/cert.pem (1123 bytes)
I0915 01:24:13.570617 7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/key.pem (1679 bytes)
I0915 01:24:13.570666 7717 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.pem (1078 bytes)
I0915 01:24:13.570711 7717 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem org=jenkins.addons-20210915012342-6768 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210915012342-6768]
I0915 01:24:13.803214 7717 provision.go:172] copyRemoteCerts
I0915 01:24:13.803261 7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0915 01:24:13.803290 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:13.839139 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:13.918310 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0915 01:24:13.935224 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0915 01:24:13.949906 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0915 01:24:13.964223 7717 provision.go:86] duration metric: configureAuth took 428.982888ms
I0915 01:24:13.964242 7717 ubuntu.go:193] setting minikube options for container-runtime
I0915 01:24:13.964386 7717 config.go:177] Loaded profile config "addons-20210915012342-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 01:24:13.964430 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:13.999813 7717 main.go:130] libmachine: Using SSH client type: native
I0915 01:24:13.999958 7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0915 01:24:13.999976 7717 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0915 01:24:14.105168 7717 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0915 01:24:14.105191 7717 ubuntu.go:71] root file system type: overlay
I0915 01:24:14.105379 7717 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0915 01:24:14.105434 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:14.141278 7717 main.go:130] libmachine: Using SSH client type: native
I0915 01:24:14.141417 7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0915 01:24:14.141475 7717 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0915 01:24:14.250866 7717 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0915 01:24:14.250942 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:14.287057 7717 main.go:130] libmachine: Using SSH client type: native
I0915 01:24:14.287213 7717 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0915 01:24:14.287234 7717 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0915 01:24:14.858052 7717 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-07-30 19:52:33.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-09-15 01:24:14.247123693 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
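The block above is the write-if-changed idiom the provisioner uses for docker.service: render the candidate unit to docker.service.new, and only when diff reports a difference (non-zero exit) move it into place, reload systemd, and re-enable and restart the daemon. A trimmed sketch of the same idiom, with the unit body elided:

    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    [Unit]
    ...rendered unit content...
    EOF
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }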
I0915 01:24:14.858083 7717 machine.go:91] provisioned docker machine in 1.683733527s
I0915 01:24:14.858094 7717 client.go:171] LocalClient.Create took 8.020386686s
I0915 01:24:14.858104 7717 start.go:168] duration metric: libmachine.API.Create for "addons-20210915012342-6768" took 8.020432581s
I0915 01:24:14.858113 7717 start.go:267] post-start starting for "addons-20210915012342-6768" (driver="docker")
I0915 01:24:14.858118 7717 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0915 01:24:14.858175 7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0915 01:24:14.858214 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:14.893956 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:14.974875 7717 ssh_runner.go:152] Run: cat /etc/os-release
I0915 01:24:14.977367 7717 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0915 01:24:14.977390 7717 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0915 01:24:14.977401 7717 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0915 01:24:14.977408 7717 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0915 01:24:14.977418 7717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/addons for local assets ...
I0915 01:24:14.977469 7717 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/files for local assets ...
I0915 01:24:14.977496 7717 start.go:270] post-start completed in 119.376941ms
I0915 01:24:14.977758 7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
I0915 01:24:15.012312 7717 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/config.json ...
I0915 01:24:15.012510 7717 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0915 01:24:15.012569 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:15.047276 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:15.124397 7717 start.go:129] duration metric: createHost completed in 8.289186304s
I0915 01:24:15.124425 7717 start.go:80] releasing machines lock for "addons-20210915012342-6768", held for 8.289299626s
I0915 01:24:15.124490 7717 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210915012342-6768
I0915 01:24:15.159494 7717 ssh_runner.go:152] Run: systemctl --version
I0915 01:24:15.159541 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:15.159553 7717 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0915 01:24:15.159594 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:15.200965 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:15.202135 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:15.333961 7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0915 01:24:15.342280 7717 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0915 01:24:15.350251 7717 cruntime.go:255] skipping containerd shutdown because we are bound to it
I0915 01:24:15.350294 7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0915 01:24:15.357943 7717 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0915 01:24:15.368784 7717 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0915 01:24:15.425088 7717 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0915 01:24:15.477832 7717 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0915 01:24:15.485761 7717 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0915 01:24:15.537495 7717 ssh_runner.go:152] Run: sudo systemctl start docker
I0915 01:24:15.545498 7717 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0915 01:24:15.581742 7717 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0915 01:24:15.620804 7717 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
I0915 01:24:15.620881 7717 cli_runner.go:115] Run: docker network inspect addons-20210915012342-6768 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 01:24:15.654470 7717 ssh_runner.go:152] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0915 01:24:15.657487 7717 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
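The /etc/hosts rewrite above is deliberately done with cp rather than mv: Docker manages /etc/hosts as a bind mount inside the container, so the file has to be overwritten in place. The idiom strips any existing record for the name, appends a fresh one, and copies the temp file back. A sketch using the values from this run:

    NAME=host.minikube.internal
    IP=192.168.49.1
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts    # cp, not mv, so the bind-mounted file keeps its inode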
I0915 01:24:15.665636 7717 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0915 01:24:15.665683 7717 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 01:24:15.693738 7717 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0915 01:24:15.693757 7717 docker.go:489] Images already preloaded, skipping extraction
I0915 01:24:15.693791 7717 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 01:24:15.720708 7717 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4
-- /stdout --
I0915 01:24:15.720729 7717 cache_images.go:78] Images are preloaded, skipping loading
I0915 01:24:15.720773 7717 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0915 01:24:15.794624 7717 cni.go:93] Creating CNI manager for ""
I0915 01:24:15.794641 7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 01:24:15.794648 7717 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0915 01:24:15.794658 7717 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210915012342-6768 NodeName:addons-20210915012342-6768 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0915 01:24:15.794774 7717 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "addons-20210915012342-6768"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
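Two things are worth noting about the generated config: the cgroupDriver declared in the KubeletConfiguration has to agree with the container runtime (the provisioner queried Docker for it at 01:24:15.720 above), and the four documents are written as one multi-document YAML file that kubeadm consumes in a single pass (the init call at 01:24:17 below). A minimal sketch of both steps:

    docker info --format '{{.CgroupDriver}}'    # cgroupfs in this run, matching "cgroupDriver: cgroupfs" above
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml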
I0915 01:24:15.794863 7717 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210915012342-6768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0915 01:24:15.794928 7717 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
I0915 01:24:15.802950 7717 binaries.go:44] Found k8s binaries, skipping transfer
I0915 01:24:15.802998 7717 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0915 01:24:15.809031 7717 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
I0915 01:24:15.820140 7717 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0915 01:24:15.830993 7717 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
I0915 01:24:15.841714 7717 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0915 01:24:15.844328 7717 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0915 01:24:15.852253 7717 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768 for IP: 192.168.49.2
I0915 01:24:15.852288 7717 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key
I0915 01:24:15.994441 7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt ...
I0915 01:24:15.994473 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt: {Name:mk82e6b53d2785698b6872502d05efcc2184b0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:15.994641 7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key ...
I0915 01:24:15.994653 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key: {Name:mk3eb9792b7cf8e12aa1e54183d2ecab549452d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:15.994728 7717 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key
I0915 01:24:16.380381 7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt ...
I0915 01:24:16.380419 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt: {Name:mkae95b129223c0a4b86eab2eee067267f086ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.380619 7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key ...
I0915 01:24:16.380632 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key: {Name:mkab485fae645d4fa089e28b4dd468821ba71f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.380746 7717 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key
I0915 01:24:16.380758 7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt with IP's: []
I0915 01:24:16.564833 7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt ...
I0915 01:24:16.564870 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.crt: {Name:mk3e81de32efda483a4ab503975f0ec212d4e7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.565062 7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key ...
I0915 01:24:16.565077 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/client.key: {Name:mk995c7bd3a360efbd0d06dd026804113b452757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.565164 7717 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2
I0915 01:24:16.565175 7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0915 01:24:16.750988 7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 ...
I0915 01:24:16.751026 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2: {Name:mk5de554520d0b626bf0e2bec0fe05f0f559dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.751207 7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2 ...
I0915 01:24:16.751219 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2: {Name:mk315e0bd186039b3eff0de60eb16ab99d6ae1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.751306 7717 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt
I0915 01:24:16.751429 7717 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key
I0915 01:24:16.751489 7717 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key
I0915 01:24:16.751498 7717 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt with IP's: []
I0915 01:24:16.951055 7717 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt ...
I0915 01:24:16.951082 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt: {Name:mkd30313ce32e16a3a2ef08933646a28bb8e3826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.951251 7717 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key ...
I0915 01:24:16.951264 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key: {Name:mke8365a3af75830a535fcade17439f64d264638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:16.951474 7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca-key.pem (1679 bytes)
I0915 01:24:16.951511 7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/ca.pem (1078 bytes)
I0915 01:24:16.951533 7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/cert.pem (1123 bytes)
I0915 01:24:16.951552 7717 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/certs/key.pem (1679 bytes)
I0915 01:24:16.952422 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0915 01:24:16.969110 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0915 01:24:16.984223 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0915 01:24:16.999307 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/profiles/addons-20210915012342-6768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0915 01:24:17.013872 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0915 01:24:17.028189 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0915 01:24:17.042420 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0915 01:24:17.056678 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0915 01:24:17.070952 7717 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0915 01:24:17.085257 7717 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0915 01:24:17.095807 7717 ssh_runner.go:152] Run: openssl version
I0915 01:24:17.100299 7717 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0915 01:24:17.108521 7717 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0915 01:24:17.111151 7717 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 01:24 /usr/share/ca-certificates/minikubeCA.pem
I0915 01:24:17.111193 7717 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0915 01:24:17.115429 7717 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
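The two commands above are the standard OpenSSL trust-store dance: the CA PEM is linked into /etc/ssl/certs under its subject-hash name so that TLS clients on the node can find it. A sketch of the same steps:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0    # b5213941.0 in this run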
I0915 01:24:17.121690 7717 kubeadm.go:390] StartCluster: {Name:addons-20210915012342-6768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:addons-20210915012342-6768 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 01:24:17.121782 7717 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 01:24:17.150462 7717 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0915 01:24:17.156694 7717 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0915 01:24:17.162528 7717 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0915 01:24:17.162577 7717 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0915 01:24:17.168324 7717 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0915 01:24:17.168354 7717 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0915 01:24:30.215708 7717 out.go:204] - Generating certificates and keys ...
I0915 01:24:30.218847 7717 out.go:204] - Booting up control plane ...
I0915 01:24:30.221495 7717 out.go:204] - Configuring RBAC rules ...
I0915 01:24:30.224120 7717 cni.go:93] Creating CNI manager for ""
I0915 01:24:30.224137 7717 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 01:24:30.224166 7717 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0915 01:24:30.224307 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:30.224399 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=addons-20210915012342-6768 minikube.k8s.io/updated_at=2021_09_15T01_24_30_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:30.564117 7717 ops.go:34] apiserver oom_adj: -16
I0915 01:24:30.564201 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:31.114911 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:31.614576 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:32.114492 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:32.615099 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:33.114794 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:33.614356 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:34.115137 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:34.614947 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:35.115015 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:35.614368 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:36.114877 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:36.614425 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:37.115312 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:37.614796 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:38.114666 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:38.615236 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:39.115344 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:39.614535 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:40.114800 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:40.614934 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:41.114782 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:41.614898 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:42.115358 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:42.614597 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:43.114557 7717 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 01:24:43.171583 7717 kubeadm.go:985] duration metric: took 12.947329577s to wait for elevateKubeSystemPrivileges.
I0915 01:24:43.171611 7717 kubeadm.go:392] StartCluster complete in 26.049927534s
I0915 01:24:43.171627 7717 settings.go:142] acquiring lock: {Name:mk9e57581826ef1ab9c29fc377d83267ef74c695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:43.171746 7717 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig
I0915 01:24:43.172265 7717 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/kubeconfig: {Name:mkf4cafc535fa65fd368ee043668c4a421c567e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 01:24:43.688182 7717 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210915012342-6768" rescaled to 1
I0915 01:24:43.688250 7717 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0915 01:24:43.690008 7717 out.go:177] * Verifying Kubernetes components...
I0915 01:24:43.690072 7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 01:24:43.688296 7717 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0915 01:24:43.688311 7717 addons.go:404] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
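enableAddons is driven here by the addon flags passed to minikube start in this test; the same per-profile toggles are also available after start-up through the addons subcommand. A sketch (not taken from this log):

    minikube -p addons-20210915012342-6768 addons list
    minikube -p addons-20210915012342-6768 addons enable csi-hostpath-driver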
I0915 01:24:43.690178 7717 addons.go:65] Setting ingress=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690190 7717 addons.go:65] Setting metrics-server=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690198 7717 addons.go:153] Setting addon ingress=true in "addons-20210915012342-6768"
I0915 01:24:43.690205 7717 addons.go:153] Setting addon metrics-server=true in "addons-20210915012342-6768"
I0915 01:24:43.690216 7717 addons.go:65] Setting registry=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690231 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.690233 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.688453 7717 config.go:177] Loaded profile config "addons-20210915012342-6768": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 01:24:43.690248 7717 addons.go:65] Setting olm=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690257 7717 addons.go:153] Setting addon olm=true in "addons-20210915012342-6768"
I0915 01:24:43.690282 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.690296 7717 addons.go:65] Setting helm-tiller=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690237 7717 addons.go:153] Setting addon registry=true in "addons-20210915012342-6768"
I0915 01:24:43.690314 7717 addons.go:153] Setting addon helm-tiller=true in "addons-20210915012342-6768"
I0915 01:24:43.690298 7717 addons.go:65] Setting default-storageclass=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690334 7717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210915012342-6768"
I0915 01:24:43.690326 7717 addons.go:65] Setting storage-provisioner=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690348 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.690361 7717 addons.go:153] Setting addon storage-provisioner=true in "addons-20210915012342-6768"
I0915 01:24:43.690359 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
W0915 01:24:43.690377 7717 addons.go:165] addon storage-provisioner should already be in state true
I0915 01:24:43.690409 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.690680 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690766 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690773 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690808 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690830 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690178 7717 addons.go:65] Setting volumesnapshots=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690865 7717 addons.go:153] Setting addon volumesnapshots=true in "addons-20210915012342-6768"
I0915 01:24:43.690877 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.690883 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.690904 7717 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20210915012342-6768"
I0915 01:24:43.690940 7717 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20210915012342-6768"
I0915 01:24:43.690967 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.691294 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.691383 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.691391 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.806180 7717 out.go:177] - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
I0915 01:24:43.806270 7717 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0915 01:24:43.806281 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
I0915 01:24:43.806340 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.811583 7717 out.go:177] - Using image ghcr.io/helm/tiller:v2.16.12
I0915 01:24:43.811706 7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0915 01:24:43.811715 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2423 bytes)
I0915 01:24:43.811763 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.815294 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
I0915 01:24:43.816807 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
I0915 01:24:43.818230 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
I0915 01:24:43.820400 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
I0915 01:24:43.818420 7717 out.go:177] - Using image quay.io/operatorhubio/catalog:latest
I0915 01:24:43.824097 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
I0915 01:24:43.825571 7717 out.go:177] - Using image quay.io/operator-framework/olm
I0915 01:24:43.827045 7717 out.go:177] - Using image registry:2.7.1
I0915 01:24:43.828423 7717 out.go:177] - Using image gcr.io/google_containers/kube-registry-proxy:0.4
I0915 01:24:43.828532 7717 addons.go:337] installing /etc/kubernetes/addons/registry-rc.yaml
I0915 01:24:43.828543 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
I0915 01:24:43.828593 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.818260 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
I0915 01:24:43.824743 7717 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0915 01:24:43.833924 7717 out.go:177] - Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
I0915 01:24:43.836791 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0915 01:24:43.837026 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0915 01:24:43.837090 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.837738 7717 node_ready.go:35] waiting up to 6m0s for node "addons-20210915012342-6768" to be "Ready" ...
I0915 01:24:43.837865 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
I0915 01:24:43.837918 7717 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0915 01:24:43.840820 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
I0915 01:24:43.838004 7717 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0915 01:24:43.842681 7717 out.go:177] - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
I0915 01:24:43.840998 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0915 01:24:43.841859 7717 addons.go:153] Setting addon default-storageclass=true in "addons-20210915012342-6768"
I0915 01:24:43.842519 7717 node_ready.go:49] node "addons-20210915012342-6768" has status "Ready":"True"
I0915 01:24:43.844083 7717 out.go:177] - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
I0915 01:24:43.844169 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
W0915 01:24:43.845438 7717 addons.go:165] addon default-storageclass should already be in state true
I0915 01:24:43.845459 7717 node_ready.go:38] duration metric: took 7.683952ms waiting for node "addons-20210915012342-6768" to be "Ready" ...
I0915 01:24:43.845469 7717 host.go:66] Checking if "addons-20210915012342-6768" exists ...
I0915 01:24:43.845477 7717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 01:24:43.845615 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
I0915 01:24:43.846897 7717 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
I0915 01:24:43.846956 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0915 01:24:43.846966 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0915 01:24:43.847014 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.846720 7717 addons.go:337] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0915 01:24:43.847095 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (17019 bytes)
I0915 01:24:43.847139 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.852220 7717 cli_runner.go:115] Run: docker container inspect addons-20210915012342-6768 --format={{.State.Status}}
I0915 01:24:43.854093 7717 addons.go:337] installing /etc/kubernetes/addons/crds.yaml
I0915 01:24:43.854140 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
I0915 01:24:43.854220 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.876195 7717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace to be "Ready" ...
I0915 01:24:43.890788 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.895792 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.908616 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.920110 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.929207 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.950380 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.965377 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:43.966151 7717 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
I0915 01:24:43.966169 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0915 01:24:43.966208 7717 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210915012342-6768
I0915 01:24:43.973648 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:44.001603 7717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-3216-d52130b292d08b0a6095e884aa0df76b8e13fcee/.minikube/machines/addons-20210915012342-6768/id_rsa Username:docker}
I0915 01:24:44.227528 7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0915 01:24:44.227560 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0915 01:24:44.229546 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0915 01:24:44.233077 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0915 01:24:44.308815 7717 addons.go:337] installing /etc/kubernetes/addons/olm.yaml
I0915 01:24:44.308840 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/olm.yaml (9929 bytes)
I0915 01:24:44.310305 7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0915 01:24:44.310354 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0915 01:24:44.310482 7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0915 01:24:44.310498 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
I0915 01:24:44.314115 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
I0915 01:24:44.314133 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
I0915 01:24:44.326256 7717 addons.go:337] installing /etc/kubernetes/addons/registry-svc.yaml
I0915 01:24:44.326279 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0915 01:24:44.328964 7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0915 01:24:44.329023 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0915 01:24:44.415354 7717 start.go:729] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0915 01:24:44.415808 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0915 01:24:44.416743 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0915 01:24:44.416793 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
I0915 01:24:44.417490 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0915 01:24:44.420921 7717 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0915 01:24:44.420940 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0915 01:24:44.422783 7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0915 01:24:44.422801 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
I0915 01:24:44.428595 7717 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0915 01:24:44.428612 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
I0915 01:24:44.431511 7717 addons.go:337] installing /etc/kubernetes/addons/registry-proxy.yaml
I0915 01:24:44.431528 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
I0915 01:24:44.509822 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0915 01:24:44.509847 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
I0915 01:24:44.512135 7717 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0915 01:24:44.512191 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
I0915 01:24:44.520465 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0915 01:24:44.524728 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0915 01:24:44.524751 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
I0915 01:24:44.608775 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0915 01:24:44.608805 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
I0915 01:24:44.609056 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0915 01:24:44.611034 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0915 01:24:44.725500 7717 addons.go:337] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 01:24:44.725530 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
I0915 01:24:44.730287 7717 addons.go:337] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0915 01:24:44.730311 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
I0915 01:24:45.849684 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 01:24:45.850010 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0915 01:24:45.850033 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
I0915 01:24:45.865539 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0915 01:24:45.865565 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
I0915 01:24:45.880225 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0915 01:24:45.880247 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
I0915 01:24:45.891905 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
I0915 01:24:45.891926 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
I0915 01:24:45.905918 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0915 01:24:45.905939 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
I0915 01:24:45.919002 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
I0915 01:24:45.919025 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
I0915 01:24:45.932386 7717 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0915 01:24:45.932415 7717 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0915 01:24:45.943658 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0915 01:24:48.362733 7717 pod_ready.go:102] pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace has status "Ready":"False"
I0915 01:24:50.070211 7717 pod_ready.go:92] pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.070236 7717 pod_ready.go:81] duration metric: took 6.194003932s waiting for pod "coredns-78fcd69978-kq6bp" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.070248 7717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.122129 7717 pod_ready.go:92] pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.122157 7717 pod_ready.go:81] duration metric: took 51.902017ms waiting for pod "coredns-78fcd69978-pmmqc" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.122170 7717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.216473 7717 pod_ready.go:92] pod "etcd-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.216501 7717 pod_ready.go:81] duration metric: took 94.32156ms waiting for pod "etcd-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.216516 7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.233771 7717 pod_ready.go:92] pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.233853 7717 pod_ready.go:81] duration metric: took 17.327005ms waiting for pod "kube-apiserver-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.233884 7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.322054 7717 pod_ready.go:92] pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.322084 7717 pod_ready.go:81] duration metric: took 88.176786ms waiting for pod "kube-controller-manager-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.322097 7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xf8sd" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.614874 7717 pod_ready.go:92] pod "kube-proxy-xf8sd" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.614976 7717 pod_ready.go:81] duration metric: took 292.86788ms waiting for pod "kube-proxy-xf8sd" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.615012 7717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.715585 7717 pod_ready.go:92] pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace has status "Ready":"True"
I0915 01:24:50.715611 7717 pod_ready.go:81] duration metric: took 100.54725ms waiting for pod "kube-scheduler-addons-20210915012342-6768" in "kube-system" namespace to be "Ready" ...
I0915 01:24:50.715622 7717 pod_ready.go:38] duration metric: took 6.870128841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 01:24:50.715641 7717 api_server.go:50] waiting for apiserver process to appear ...
I0915 01:24:50.715684 7717 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 01:24:52.226484 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.993376186s)
I0915 01:24:52.226583 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.997008647s)
I0915 01:24:52.226649 7717 addons.go:375] Verifying addon ingress=true in "addons-20210915012342-6768"
I0915 01:24:52.226672 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.810806473s)
I0915 01:24:52.228336 7717 out.go:177] * Verifying ingress addon...
I0915 01:24:52.230618 7717 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0915 01:24:52.318693 7717 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0915 01:24:52.318720 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:52.908898 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:53.528904 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.008409506s)
I0915 01:24:53.528975 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.111453263s)
I0915 01:24:53.528977 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.919889177s)
I0915 01:24:53.528993 7717 addons.go:375] Verifying addon registry=true in "addons-20210915012342-6768"
W0915 01:24:53.528999 7717 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0915 01:24:53.529030 7717 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
stderr:
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
I0915 01:24:53.529117 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.91806092s)
I0915 01:24:53.529139 7717 addons.go:375] Verifying addon metrics-server=true in "addons-20210915012342-6768"
I0915 01:24:53.530827 7717 out.go:177] * Verifying registry addon...
I0915 01:24:53.531429 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:53.529237 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.679516802s)
W0915 01:24:53.531587 7717 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
I0915 01:24:53.531610 7717 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
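Both failures above are the same race: the CRDs and the custom resources that depend on them are applied in one pass, so the OperatorGroup and VolumeSnapshotClass objects are rejected until the freshly created CRDs are served by the API server, and the apply has to be retried. A hedged sketch of one way to avoid that, waiting for the CRDs to report the Established condition before applying the dependent objects, is below; it shells out to kubectl wait. The CRD names come from the "created" lines in this log, while the helper name and the 60s timeout are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // waitForCRDs blocks until each named CRD reports condition Established,
    // so that custom resources of those kinds can be applied afterwards.
    func waitForCRDs(names ...string) error {
        for _, name := range names {
            cmd := exec.Command("kubectl", "wait",
                "--for=condition=Established",
                "--timeout=60s",
                "crd/"+name)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("CRD %s not established: %v: %s", name, err, out)
            }
        }
        return nil
    }

    func main() {
        // CRD names as created in the log output above.
        err := waitForCRDs(
            "volumesnapshotclasses.snapshot.storage.k8s.io",
            "operatorgroups.operators.coreos.com",
        )
        fmt.Println(err)
    }
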
I0915 01:24:53.532912 7717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0915 01:24:53.627039 7717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0915 01:24:53.627067 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:53.805972 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
I0915 01:24:53.828448 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:53.892708 7717 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 01:24:54.214453 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:54.329361 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:54.329646 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.385943177s)
I0915 01:24:54.329816 7717 addons.go:375] Verifying addon csi-hostpath-driver=true in "addons-20210915012342-6768"
I0915 01:24:54.329777 7717 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.614078644s)
I0915 01:24:54.329993 7717 api_server.go:70] duration metric: took 10.641707566s to wait for apiserver process to appear ...
I0915 01:24:54.330024 7717 api_server.go:86] waiting for apiserver healthz status ...
I0915 01:24:54.330057 7717 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0915 01:24:54.331613 7717 out.go:177] * Verifying csi-hostpath-driver addon...
I0915 01:24:54.333837 7717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0915 01:24:54.414030 7717 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
ok
I0915 01:24:54.418035 7717 api_server.go:139] control plane version: v1.22.1
I0915 01:24:54.418064 7717 api_server.go:129] duration metric: took 88.013428ms to wait for apiserver health ...
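The healthz probe logged here is a plain HTTPS GET against the apiserver that is expected to return 200 and the body "ok". A minimal sketch of the same check follows; it skips certificate verification purely for brevity (an assumption for this illustration), whereas the real flow verifies the apiserver with the cluster CA from the kubeconfig. The endpoint URL is the one shown in the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify only for this sketch; use the kubeconfig CA bundle in practice.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
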
I0915 01:24:54.418074 7717 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 01:24:54.421304 7717 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0915 01:24:54.421341 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:54.430869 7717 system_pods.go:59] 19 kube-system pods found
I0915 01:24:54.430908 7717 system_pods.go:61] "coredns-78fcd69978-kq6bp" [8ad7f1c9-28f3-46f9-9236-584bc24602ed] Running
I0915 01:24:54.430917 7717 system_pods.go:61] "coredns-78fcd69978-pmmqc" [4eae89e0-944f-47e4-90dc-bcb15825ce64] Running
I0915 01:24:54.430925 7717 system_pods.go:61] "csi-hostpath-attacher-0" [daa12cf1-a96e-4844-a77f-f18ec9ae48b6] Pending
I0915 01:24:54.430933 7717 system_pods.go:61] "csi-hostpath-provisioner-0" [6c3bfda3-b873-4f59-9154-1c1eebec79c6] Pending
I0915 01:24:54.430941 7717 system_pods.go:61] "csi-hostpath-resizer-0" [7917d661-b4e0-4df5-982d-eec18d58a5c7] Pending
I0915 01:24:54.430953 7717 system_pods.go:61] "csi-hostpath-snapshotter-0" [82c44e60-ff9d-421b-9a25-ff2c4e39cf20] Pending
I0915 01:24:54.430960 7717 system_pods.go:61] "csi-hostpathplugin-0" [d7439a1e-b83f-42da-a10b-9b53be29e4a5] Pending
I0915 01:24:54.430968 7717 system_pods.go:61] "etcd-addons-20210915012342-6768" [e81b6548-9356-458a-9287-10f8ee37d852] Running
I0915 01:24:54.430975 7717 system_pods.go:61] "kube-apiserver-addons-20210915012342-6768" [cf986ab2-560f-42d1-a77c-10379e41992e] Running
I0915 01:24:54.430984 7717 system_pods.go:61] "kube-controller-manager-addons-20210915012342-6768" [641f1bdb-2828-4878-9bbc-1451be499385] Running
I0915 01:24:54.430991 7717 system_pods.go:61] "kube-proxy-xf8sd" [b0c1ab2b-6d53-4f60-be02-018babe698ea] Running
I0915 01:24:54.430998 7717 system_pods.go:61] "kube-scheduler-addons-20210915012342-6768" [d401b548-7760-4572-8d56-bdec28034c57] Running
I0915 01:24:54.431011 7717 system_pods.go:61] "metrics-server-77c99ccb96-wpjcb" [47810a13-c9ae-42d6-a4b8-981ff0c391d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0915 01:24:54.431023 7717 system_pods.go:61] "registry-d2wk4" [89a45b74-b58c-468a-8c45-d173530c049f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0915 01:24:54.431034 7717 system_pods.go:61] "registry-proxy-vhhfv" [985239f4-f991-4990-bf20-39effa769ac7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0915 01:24:54.431046 7717 system_pods.go:61] "snapshot-controller-989f9ddc8-ff7mh" [e8b3528d-9cc8-4501-97dd-385861a0b54c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 01:24:54.431055 7717 system_pods.go:61] "snapshot-controller-989f9ddc8-wxnqh" [c198cc5e-bdb4-482a-a3a6-af7f9345c6e4] Pending
I0915 01:24:54.431065 7717 system_pods.go:61] "storage-provisioner" [e0d5c4d0-79f3-4daf-b2a3-dfed5458ee38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 01:24:54.431075 7717 system_pods.go:61] "tiller-deploy-7d9fb5c894-gqw79" [7780487d-b571-401f-a059-bb6ed78f19c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0915 01:24:54.431083 7717 system_pods.go:74] duration metric: took 13.003689ms to wait for pod list to return data ...
I0915 01:24:54.431093 7717 default_sa.go:34] waiting for default service account to be created ...
I0915 01:24:54.511268 7717 default_sa.go:45] found service account: "default"
I0915 01:24:54.511301 7717 default_sa.go:55] duration metric: took 80.200864ms for default service account to be created ...
I0915 01:24:54.511313 7717 system_pods.go:116] waiting for k8s-apps to be running ...
I0915 01:24:54.527991 7717 system_pods.go:86] 19 kube-system pods found
I0915 01:24:54.528072 7717 system_pods.go:89] "coredns-78fcd69978-kq6bp" [8ad7f1c9-28f3-46f9-9236-584bc24602ed] Running
I0915 01:24:54.528096 7717 system_pods.go:89] "coredns-78fcd69978-pmmqc" [4eae89e0-944f-47e4-90dc-bcb15825ce64] Running
I0915 01:24:54.528115 7717 system_pods.go:89] "csi-hostpath-attacher-0" [daa12cf1-a96e-4844-a77f-f18ec9ae48b6] Pending
I0915 01:24:54.528135 7717 system_pods.go:89] "csi-hostpath-provisioner-0" [6c3bfda3-b873-4f59-9154-1c1eebec79c6] Pending
I0915 01:24:54.528155 7717 system_pods.go:89] "csi-hostpath-resizer-0" [7917d661-b4e0-4df5-982d-eec18d58a5c7] Pending
I0915 01:24:54.528174 7717 system_pods.go:89] "csi-hostpath-snapshotter-0" [82c44e60-ff9d-421b-9a25-ff2c4e39cf20] Pending
I0915 01:24:54.528192 7717 system_pods.go:89] "csi-hostpathplugin-0" [d7439a1e-b83f-42da-a10b-9b53be29e4a5] Pending
I0915 01:24:54.528211 7717 system_pods.go:89] "etcd-addons-20210915012342-6768" [e81b6548-9356-458a-9287-10f8ee37d852] Running
I0915 01:24:54.528232 7717 system_pods.go:89] "kube-apiserver-addons-20210915012342-6768" [cf986ab2-560f-42d1-a77c-10379e41992e] Running
I0915 01:24:54.528253 7717 system_pods.go:89] "kube-controller-manager-addons-20210915012342-6768" [641f1bdb-2828-4878-9bbc-1451be499385] Running
I0915 01:24:54.528273 7717 system_pods.go:89] "kube-proxy-xf8sd" [b0c1ab2b-6d53-4f60-be02-018babe698ea] Running
I0915 01:24:54.528293 7717 system_pods.go:89] "kube-scheduler-addons-20210915012342-6768" [d401b548-7760-4572-8d56-bdec28034c57] Running
I0915 01:24:54.528320 7717 system_pods.go:89] "metrics-server-77c99ccb96-wpjcb" [47810a13-c9ae-42d6-a4b8-981ff0c391d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0915 01:24:54.528345 7717 system_pods.go:89] "registry-d2wk4" [89a45b74-b58c-468a-8c45-d173530c049f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0915 01:24:54.528375 7717 system_pods.go:89] "registry-proxy-vhhfv" [985239f4-f991-4990-bf20-39effa769ac7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0915 01:24:54.528399 7717 system_pods.go:89] "snapshot-controller-989f9ddc8-ff7mh" [e8b3528d-9cc8-4501-97dd-385861a0b54c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 01:24:54.528420 7717 system_pods.go:89] "snapshot-controller-989f9ddc8-wxnqh" [c198cc5e-bdb4-482a-a3a6-af7f9345c6e4] Pending
I0915 01:24:54.528443 7717 system_pods.go:89] "storage-provisioner" [e0d5c4d0-79f3-4daf-b2a3-dfed5458ee38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 01:24:54.528465 7717 system_pods.go:89] "tiller-deploy-7d9fb5c894-gqw79" [7780487d-b571-401f-a059-bb6ed78f19c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0915 01:24:54.528494 7717 system_pods.go:126] duration metric: took 17.173554ms to wait for k8s-apps to be running ...
I0915 01:24:54.528516 7717 system_svc.go:44] waiting for kubelet service to be running ....
I0915 01:24:54.528575 7717 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 01:24:54.712839 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:54.823121 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:55.010753 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:55.132002 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:55.324109 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:55.426879 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:55.632608 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:55.823798 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:55.927259 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:56.134020 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:56.323520 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:56.431178 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:56.632217 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:56.823999 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:56.929994 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:57.327624 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:57.328202 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:57.426674 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:57.631364 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:57.827886 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:57.928145 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:58.317054 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:58.411813 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:58.427498 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:58.631354 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:58.718964 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.912956583s)
I0915 01:24:58.719140 7717 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.826388801s)
I0915 01:24:58.719162 7717 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (4.190555614s)
I0915 01:24:58.719182 7717 system_svc.go:56] duration metric: took 4.190662202s WaitForService to wait for kubelet.
I0915 01:24:58.719192 7717 kubeadm.go:547] duration metric: took 15.030911841s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0915 01:24:58.719222 7717 node_conditions.go:102] verifying NodePressure condition ...
I0915 01:24:58.722617 7717 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0915 01:24:58.722647 7717 node_conditions.go:123] node cpu capacity is 8
I0915 01:24:58.722664 7717 node_conditions.go:105] duration metric: took 3.434293ms to run NodePressure ...
I0915 01:24:58.722676 7717 start.go:231] waiting for startup goroutines ...
I0915 01:24:58.822654 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:58.926707 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:59.132436 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:59.323380 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:59.432238 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:24:59.631373 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:24:59.822148 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:24:59.927068 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:00.131271 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:00.322618 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:00.427247 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:00.632482 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:00.822378 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:00.926371 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:01.131769 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:01.321461 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:01.426401 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:01.631514 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:01.822270 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:01.925910 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:02.131102 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:02.322479 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:02.426814 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:02.631710 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:02.822565 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:02.926580 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:03.132138 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:03.322327 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:03.425763 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:03.630716 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:03.822940 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:03.925954 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:04.131425 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:04.322998 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:04.427359 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:04.633795 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:04.822456 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:04.926493 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:05.130906 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:05.322938 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:05.425885 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:05.630681 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:05.822636 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:05.926232 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:06.131728 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:06.322644 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:06.426800 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:06.631296 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:06.822178 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:06.925942 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:07.130718 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:07.322537 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:07.426377 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:07.631345 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:07.822084 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:07.925806 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:08.130693 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:08.322799 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:08.425772 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:08.630765 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:08.822801 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:08.925477 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:09.131245 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:09.322209 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:09.425889 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:09.631060 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:09.822449 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:09.926105 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:10.135377 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:10.323234 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:10.426561 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:10.631840 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:10.823939 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:10.929475 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:11.131659 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:11.323232 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:11.426397 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:11.632062 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:11.822512 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:11.926521 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:12.131813 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:12.323130 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:12.430949 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:12.631580 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:12.822895 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:12.928543 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:13.133814 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:13.322772 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:13.426817 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:13.631393 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:13.823393 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:13.926426 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:14.132149 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:14.322568 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:14.427491 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:14.631752 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:14.822357 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:14.926196 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:15.131582 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 01:25:15.323112 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:15.427099 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:15.631092 7717 kapi.go:108] duration metric: took 22.098175317s to wait for kubernetes.io/minikube-addons=registry ...
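The kapi.go lines above are a polling loop: list pods by label selector and keep waiting until every match reports Running (here it took ~22s for the registry label). The following client-go sketch approximates that loop; it is not minikube's own code. The kubeconfig path, namespace, and label selector are taken from the log, while the helper name, poll interval, and timeout are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls until every pod matching the selector is Running,
    // roughly what the repeated "waiting for pod ..." log lines are doing.
    func waitForPodsRunning(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        fmt.Println(waitForPodsRunning(client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }
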
I0915 01:25:15.822161 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:15.925934 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:16.322695 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:16.425527 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:16.822114 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:16.925749 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:17.322605 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:17.426164 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:17.821917 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:17.925354 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:18.322379 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:18.426193 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:18.822373 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:18.925814 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:19.322679 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:19.426332 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:19.822553 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:19.925553 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:20.322266 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:20.426324 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:20.821524 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:20.927034 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:21.322997 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:21.426650 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:21.823077 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:21.926275 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:22.321918 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:22.426734 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:22.823033 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:22.926909 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:23.323070 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:23.426795 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:23.823047 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:23.926986 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:24.323186 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:24.426780 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:24.824097 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:24.926283 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:25.322970 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:25.426609 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:25.822177 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:25.925828 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:26.322361 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:26.426287 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:26.822283 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:26.930396 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:27.326926 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:27.511664 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:27.822096 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:27.925880 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:28.322673 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:28.427154 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:28.821818 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:28.927831 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:29.321948 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:29.425965 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:29.822472 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:29.926839 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:30.322733 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:30.427056 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:30.824585 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:30.926356 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:31.321456 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:31.437011 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:31.821868 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:31.926229 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:32.322468 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:32.426279 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:32.822201 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:32.925430 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:33.322082 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:33.425328 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:33.821749 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:33.926281 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:34.322605 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:34.426647 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:34.822719 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:34.926368 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:35.322316 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:35.425572 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:35.822289 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:35.925885 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:36.322416 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:36.426361 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:36.823858 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:36.925550 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:37.321940 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:37.426200 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:37.822938 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:37.926483 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:38.322090 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:38.425596 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:38.822029 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:38.926972 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:39.322124 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:39.426802 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:39.821674 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:39.925750 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:40.322708 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:40.426376 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:40.822517 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:40.926527 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:41.322864 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:41.426473 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:41.821895 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:41.927066 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:42.322084 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:42.427037 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:42.822065 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:42.926167 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:43.321526 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:43.425935 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:43.822384 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:43.926491 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:44.323732 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:44.426261 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:44.823018 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:44.926272 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:45.323034 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:45.426847 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:45.822915 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:45.926530 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:46.322111 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:46.426690 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:46.822143 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:46.926141 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:47.322950 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:47.426007 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:47.822633 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:47.926148 7717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 01:25:48.322351 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:48.425990 7717 kapi.go:108] duration metric: took 54.092150944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0915 01:25:48.822736 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:49.322112 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:49.822283 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:50.322558 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:50.822571 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:51.322712 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:51.822809 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:52.321841 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:52.821985 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:53.322924 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:53.822338 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:54.322769 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:54.823000 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:55.322150 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:55.822448 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:57.067289 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:57.323014 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:57.823245 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:58.322944 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:58.823212 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:59.322472 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:25:59.823142 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:00.323320 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:00.822926 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:01.322984 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:01.822237 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:02.323226 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:02.822827 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:03.322048 7717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 01:26:03.822051 7717 kapi.go:108] duration metric: took 1m11.591429322s to wait for app.kubernetes.io/name=ingress-nginx ...
I0915 01:26:03.823931 7717 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, olm, volumesnapshots, registry, csi-hostpath-driver, ingress
I0915 01:26:03.823957 7717 addons.go:406] enableAddons completed in 1m20.135647286s
I0915 01:26:03.872049 7717 start.go:462] kubectl: 1.20.5, cluster: 1.22.1 (minor skew: 2)
I0915 01:26:03.873630 7717 out.go:177]
W0915 01:26:03.873770 7717 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.1.
I0915 01:26:03.875759 7717 out.go:177] - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
I0915 01:26:03.877156 7717 out.go:177] * Done! kubectl is now configured to use "addons-20210915012342-6768" cluster and "default" namespace by default
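Note: the warning above flags a two-minor-version skew between the host kubectl (1.20.5) and the cluster (1.22.1). A minimal way to act on the hint the log itself prints is to run the version-matched kubectl bundled with minikube; the profile name below is this test's cluster, and the commands are a sketch rather than part of the captured log:

    # run a kubectl that matches the cluster version (v1.22.1 here)
    minikube -p addons-20210915012342-6768 kubectl -- get pods -A
    # compare client and server versions with the host binary
    kubectl version --short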
*
* ==> Docker <==
* -- Logs begin at Wed 2021-09-15 01:24:10 UTC, end at Wed 2021-09-15 01:33:19 UTC. --
Sep 15 01:26:21 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:21.778062870Z" level=info msg="ignoring event" container=9193947f6df47d488fc9c1b17f5b0f6ebf6904906a2277b8810f6ff2d57e33a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:21 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:21.830744290Z" level=info msg="ignoring event" container=186feb1e4f0fc3bf20ada56e40f108afa91c6eaafd1a95c50bf7668ba1545f3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:22 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:22.625270992Z" level=info msg="ignoring event" container=8b0de9308e0cf9099d2f1ad5469c3829db02bf01b916f13f27ef86e8b8aa0c35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:22 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:22.625329846Z" level=info msg="ignoring event" container=1f725407e8b365bedc43e8ee1d6e10e367f3ccefe22cdc66a248041549e2a3ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:23 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:23.147699611Z" level=warning msg="reference for unknown type: " digest="sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4"
Sep 15 01:26:30 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:30.427545879Z" level=info msg="ignoring event" container=809202351b32da83bd09d791d6b15f139813461258763b5a8b83db38504c34da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:31 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:31.623688678Z" level=info msg="ignoring event" container=3a614138064e3f3931786dabb374718f05df53b20bd98cef972022c8a8f1d219 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:33 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:33.527495741Z" level=info msg="ignoring event" container=df96358491977caebb7dfdce107da169a20dba5455ea8fffa78d5c9f5c6cdf65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:33 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:33.726454242Z" level=info msg="ignoring event" container=f10ad361b57219adec35985225d252a36d27b6e7b45092f5f55df95eb209430b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:35 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:35.958138080Z" level=info msg="ignoring event" container=ec8dcf2cd6eae34e29b838506c5c0166e7f558be877390384eeef411949dae75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:37 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:37.012692116Z" level=info msg="ignoring event" container=e77bc9b2c521a6d77f80c9af01bfe088dd94f216c55156f4e3741f1482c0df43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:53 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:53.329510673Z" level=info msg="ignoring event" container=4263b5374e25373ac2f0b30d7278af28718091aca6da880bbcc053f204bffecd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:54 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:54.433662598Z" level=info msg="ignoring event" container=22fc4717dc4b4d2d67033a3a33a819ec8f0aba87cbeff2a909778a78bfbf3983 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.317551853Z" level=info msg="ignoring event" container=e1934f56d6f4b370d7c6f9577455f2f210633fb88cec8e2c4d7500f1dd9e8957 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.343047518Z" level=info msg="ignoring event" container=22ac0eb72b29f3781170ee604ae114856560f0db07edeebb3363e1b446ce4890 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.447395487Z" level=info msg="ignoring event" container=d5ff237173b5932ad525ea265704baacf1f6e1bccf9eabffc1e72c45daa5efe1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.485134133Z" level=info msg="ignoring event" container=3834718a46bb7b7bd05dc5718310a481df5b8ee62c2da61212aac4d07eeeadab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:55 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:55.958465119Z" level=info msg="ignoring event" container=c527e733b084c5956b9674f3c16ce3d3b862c2751df48405aac339fb81ce6743 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:56 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:56.967816274Z" level=info msg="ignoring event" container=4b6d1533143217eb946f35ab5a14506836d14f88503596262c525404f4315110 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:57 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:57.325277029Z" level=info msg="ignoring event" container=814778a55cbffd769c5a36a7c39fdd21f959c7e92269d8141baea318659b1e03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:26:57 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:26:57.434624268Z" level=info msg="ignoring event" container=0e1b827a6c82cceff9c46f6dfc74bfe16e2c4fa11a09de56540c5a31c3237485 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:27:16 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:16.471763950Z" level=info msg="ignoring event" container=f650d8135069b7318befad394c739de2bec28c6df3a9f353a5bed0e46e0d43d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:27:16 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:16.582028100Z" level=info msg="ignoring event" container=c00b87888be2dafb25f38ed66cf4e1ab5b6405ac1b7f6959066712942d6c8ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:27:18 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:18.242446129Z" level=info msg="ignoring event" container=f4bb935650aed2690df7a91b21305ce230f2c1d35154bd6fe9dd349786a30bc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 01:27:18 addons-20210915012342-6768 dockerd[449]: time="2021-09-15T01:27:18.339388306Z" level=info msg="ignoring event" container=3ffa766d7dc37a1fb01db6052d9cbd2a58b5b05d346371fc22e9bb502585518d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
5d1e36825d225 nginx@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3 6 minutes ago Running nginx 0 aadc41f12aa62
6a2921c82137e europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8 6 minutes ago Running private-image-eu 0 b5128bce8bd9c
520e510400113 quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef 6 minutes ago Running registration-operator 0 2481752cb8655
7cbae6523d919 quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef 6 minutes ago Running registration-operator 0 ab1ff66c00ace
046d462030e9d quay.io/open-cluster-management/registration-operator@sha256:6aa2c4972f8526bffd1678d121ea19d4409feec2ad3db9a93f2ab06a7b1be7ef 6 minutes ago Running registration-operator 0 601125b20b8d8
7d2c9db635690 us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8 6 minutes ago Running private-image 0 2279760bc97f8
ec8dcf2cd6eae quay.io/operator-framework/configmap-operator-registry@sha256:c42f92d2ef7953545c3b03aeebf39a00bbb16f40c0d2177561eb01a7f9eae32b 6 minutes ago Exited extract 0 e77bc9b2c521a
3a614138064e3 quay.io/operatorhubio/cluster-manager@sha256:a9225e745539308dbb7ff46c785dcacb9b1e5609f84a5557239a7ab8fc1906c1 6 minutes ago Exited pull 0 e77bc9b2c521a
9c603683ecfd3 busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1 6 minutes ago Running busybox 0 f6f2f6595cb80
809202351b32d 518fd05ba6b5b 6 minutes ago Exited util 0 e77bc9b2c521a
d0c6679691651 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:be9661afbd47e4042bee1cb48cae858cc2f4b4e121340ee69fdc0013aeffcca4 6 minutes ago Running gcp-auth 0 5a6bc363c963c
186feb1e4f0fc 17e55ec30f203 6 minutes ago Exited patch 0 8b0de9308e0cf
9193947f6df47 17e55ec30f203 6 minutes ago Exited create 0 1f725407e8b36
5557c478a8991 k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 7 minutes ago Running liveness-probe 0 ea76608e8f072
e60935786007c k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659 7 minutes ago Running hostpath 0 ea76608e8f072
8267ec18deb8a k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 7 minutes ago Running node-driver-registrar 0 ea76608e8f072
f00612f962f87 quay.io/operatorhubio/catalog@sha256:2c035752603aa817420c9964a8c1cc223e1acf8f9a6f07f05c53d75fa03c9125 7 minutes ago Running registry-server 0 d3b81c04d190c
6977f2c77abac k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16 7 minutes ago Running csi-external-health-monitor-controller 0 ea76608e8f072
936c5a9129b77 k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 7 minutes ago Running csi-snapshotter 0 b2f1050b60a84
eea707d225445 quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed 7 minutes ago Running packageserver 0 dad2d1e917c3a
9cd368ff987d8 quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed 7 minutes ago Running packageserver 0 920903b95a9cc
06110a635db22 k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 7 minutes ago Running csi-attacher 0 1cd1672c6a6b5
0eb43f8f2c2b9 k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 7 minutes ago Running csi-provisioner 0 8f290c522d5a5
38b151e3a393a k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a 7 minutes ago Running csi-resizer 0 b4cee9848f970
25ad1ae1b724f quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed 7 minutes ago Running catalog-operator 0 de1c17a793c74
025ac698fcc7f k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02 7 minutes ago Running csi-external-health-monitor-agent 0 ea76608e8f072
b01930895e3c7 quay.io/operator-framework/olm@sha256:e74b2ac57963c7f3ba19122a8c31c9f2a0deb3c0c5cac9e5323ccffd0ca198ed 7 minutes ago Running olm-operator 0 78e088f3fc1a5
4e76cf6e6db03 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 4982f7797ee55
2afc322d8da78 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 5beb8c82481a6
8d912400cc21c 6e38f40d628db 8 minutes ago Running storage-provisioner 0 0c5e12dfe9290
16b5a168478c7 8d147537fb7d1 8 minutes ago Running coredns 0 7ab4a132ed1e7
a51365e4d4520 36c4ebbc9d979 8 minutes ago Running kube-proxy 0 c65dc1df66da1
f79b2fc97e029 aca5ededae9c8 8 minutes ago Running kube-scheduler 0 f53e4b45d356a
6465e9569761d 0048118155842 8 minutes ago Running etcd 0 a9cf829977d4f
a591528a3fbd5 f30469a2491a5 8 minutes ago Running kube-apiserver 0 12ec9f889db74
e3b129b7bcd19 6e002eb89a881 8 minutes ago Running kube-controller-manager 0 3a866ddd96b4a
*
* ==> coredns [16b5a168478c] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> describe nodes <==
* Name: addons-20210915012342-6768
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-20210915012342-6768
kubernetes.io/os=linux
minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3
minikube.k8s.io/name=addons-20210915012342-6768
minikube.k8s.io/updated_at=2021_09_15T01_24_30_0700
minikube.k8s.io/version=v1.23.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-20210915012342-6768
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210915012342-6768"}
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Sep 2021 01:24:27 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-20210915012342-6768
AcquireTime: <unset>
RenewTime: Wed, 15 Sep 2021 01:33:11 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Sep 2021 01:32:32 +0000 Wed, 15 Sep 2021 01:24:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Sep 2021 01:32:32 +0000 Wed, 15 Sep 2021 01:24:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Sep 2021 01:32:32 +0000 Wed, 15 Sep 2021 01:24:24 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Sep 2021 01:32:32 +0000 Wed, 15 Sep 2021 01:24:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-20210915012342-6768
Capacity:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951368Ki
pods: 110
System Info:
Machine ID: 4b5e5cdd53d44f5ab575bb522d42acca
System UUID: 7a73b86e-8e86-49cc-8445-3e859b641b86
Boot ID: 688de29f-953b-46f5-823d-9be4668e8e77
Kernel Version: 4.9.0-16-amd64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.8
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (28 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m51s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m24s
default private-image-7ff9c8c74f-zr6nw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m42s
default private-image-eu-5956d58f9f-s4zkt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m26s
default task-pv-pod-restore 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m2s
gcp-auth gcp-auth-f6f59cc7c-qvf6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m59s
kube-system coredns-78fcd69978-pmmqc 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 8m37s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m27s
kube-system csi-hostpath-provisioner-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m26s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m25s
kube-system csi-hostpath-snapshotter-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m25s
kube-system csi-hostpathplugin-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m26s
kube-system etcd-addons-20210915012342-6768 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 8m49s
kube-system kube-apiserver-addons-20210915012342-6768 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m49s
kube-system kube-controller-manager-addons-20210915012342-6768 200m (2%) 0 (0%) 0 (0%) 0 (0%) 8m49s
kube-system kube-proxy-xf8sd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m37s
kube-system kube-scheduler-addons-20210915012342-6768 100m (1%) 0 (0%) 0 (0%) 0 (0%) 8m49s
kube-system snapshot-controller-989f9ddc8-ff7mh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m28s
kube-system snapshot-controller-989f9ddc8-wxnqh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m28s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m29s
my-etcd cluster-manager-794c6cc889-4lwmf 100m (1%) 0 (0%) 128Mi (0%) 0 (0%) 6m41s
my-etcd cluster-manager-794c6cc889-ldv66 100m (1%) 0 (0%) 128Mi (0%) 0 (0%) 6m41s
my-etcd cluster-manager-794c6cc889-x97lm 100m (1%) 0 (0%) 128Mi (0%) 0 (0%) 6m41s
olm catalog-operator-6d578c5764-4q694 10m (0%) 0 (0%) 80Mi (0%) 0 (0%) 8m26s
olm olm-operator-5b58594fc8-d98tv 10m (0%) 0 (0%) 160Mi (0%) 0 (0%) 8m26s
olm operatorhubio-catalog-jzdmc 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 7m49s
olm packageserver-5dc55c7c59-pb47t 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 7m53s
olm packageserver-5dc55c7c59-qsz8l 10m (0%) 0 (0%) 50Mi (0%) 0 (0%) 7m53s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1100m (13%) 0 (0%)
memory 944Mi (2%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m58s (x4 over 8m58s) kubelet Node addons-20210915012342-6768 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m58s (x4 over 8m58s) kubelet Node addons-20210915012342-6768 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m58s (x3 over 8m58s) kubelet Node addons-20210915012342-6768 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m58s kubelet Updated Node Allocatable limit across pods
Normal Starting 8m49s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m49s kubelet Node addons-20210915012342-6768 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m49s kubelet Node addons-20210915012342-6768 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m49s kubelet Node addons-20210915012342-6768 status is now: NodeHasSufficientPID
Normal NodeNotReady 8m49s kubelet Node addons-20210915012342-6768 status is now: NodeNotReady
Normal NodeAllocatableEnforced 8m49s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m39s kubelet Node addons-20210915012342-6768 status is now: NodeReady
*
* ==> dmesg <==
* [Sep15 01:17] #2
[ +0.004034] #3
[ +0.004037] #4
[ +0.003706] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.002514] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.002963] #5
[ +0.003903] #6
[ +0.004267] #7
[ +0.079578] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.766714] i8042: Warning: Keylock active
[ +0.338122] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[ +0.008632] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ +0.018013] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ +0.017992] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +2.917583] aufs: loading out-of-tree module taints kernel.
[Sep15 01:24] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [6465e9569761] <==
* {"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"404.826218ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-09-15T01:26:13.824Z","caller":"traceutil/trace.go:171","msg":"trace[1770283280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1299; }","duration":"404.854493ms","start":"2021-09-15T01:26:13.419Z","end":"2021-09-15T01:26:13.824Z","steps":["trace[1770283280] 'agreement among raft nodes before linearized reading' (duration: 404.814161ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"395.912704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
{"level":"info","ts":"2021-09-15T01:26:13.824Z","caller":"traceutil/trace.go:171","msg":"trace[66348658] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1299; }","duration":"395.950025ms","start":"2021-09-15T01:26:13.428Z","end":"2021-09-15T01:26:13.824Z","steps":["trace[66348658] 'agreement among raft nodes before linearized reading' (duration: 395.887826ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:13.824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:13.428Z","time spent":"395.987814ms","remote":"127.0.0.1:41036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1152,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[1499032662] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1509; }","duration":"326.667885ms","start":"2021-09-15T01:26:29.014Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[1499032662] 'read index received' (duration: 326.660582ms)","trace[1499032662] 'applied index is now lower than readState.Index' (duration: 6.259µs)"],"step_count":2}
{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[369463855] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"181.710994ms","start":"2021-09-15T01:26:29.159Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[369463855] 'process raft request' (duration: 181.394959ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"327.01863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2516"}
{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[735560544] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1434; }","duration":"327.091382ms","start":"2021-09-15T01:26:29.014Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[735560544] 'agreement among raft nodes before linearized reading' (duration: 326.781489ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:29.014Z","time spent":"327.13841ms","remote":"127.0.0.1:41040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":2540,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"318.34578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2021-09-15T01:26:29.341Z","caller":"traceutil/trace.go:171","msg":"trace[1460863496] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1435; }","duration":"318.385128ms","start":"2021-09-15T01:26:29.023Z","end":"2021-09-15T01:26:29.341Z","steps":["trace[1460863496] 'agreement among raft nodes before linearized reading' (duration: 318.266948ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:29.341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:29.023Z","time spent":"318.422406ms","remote":"127.0.0.1:41106","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":91,"response size":31,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true "}
{"level":"info","ts":"2021-09-15T01:26:34.569Z","caller":"traceutil/trace.go:171","msg":"trace[1200632049] linearizableReadLoop","detail":"{readStateIndex:1593; appliedIndex:1593; }","duration":"355.117105ms","start":"2021-09-15T01:26:34.214Z","end":"2021-09-15T01:26:34.569Z","steps":["trace[1200632049] 'read index received' (duration: 355.109079ms)","trace[1200632049] 'applied index is now lower than readState.Index' (duration: 6.615µs)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"484.399027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" ","response":"range_response_count:1 size:919"}
{"level":"info","ts":"2021-09-15T01:26:34.699Z","caller":"traceutil/trace.go:171","msg":"trace[1257760560] range","detail":"{range_begin:/registry/operators.coreos.com/operatorgroups/my-etcd/; range_end:/registry/operators.coreos.com/operatorgroups/my-etcd0; response_count:1; response_revision:1514; }","duration":"484.484275ms","start":"2021-09-15T01:26:34.214Z","end":"2021-09-15T01:26:34.699Z","steps":["trace[1257760560] 'agreement among raft nodes before linearized reading' (duration: 355.215187ms)","trace[1257760560] 'range keys from in-memory index tree' (duration: 129.148361ms)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:34.214Z","time spent":"484.546308ms","remote":"127.0.0.1:41502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":112,"response count":1,"response size":943,"request content":"key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" "}
{"level":"warn","ts":"2021-09-15T01:26:34.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.31156ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007659800411011 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" mod_revision:1510 > success:<request_put:<key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" value_size:2560 >> failure:<request_range:<key:\"/registry/operators.coreos.com/subscriptions/my-etcd/cluster-manager\" > >>","response":"size:16"}
{"level":"info","ts":"2021-09-15T01:26:34.699Z","caller":"traceutil/trace.go:171","msg":"trace[612726848] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"286.37951ms","start":"2021-09-15T01:26:34.413Z","end":"2021-09-15T01:26:34.699Z","steps":["trace[612726848] 'process raft request' (duration: 156.918875ms)","trace[612726848] 'compare' (duration: 129.203371ms)"],"step_count":2}
{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[1181682201] linearizableReadLoop","detail":"{readStateIndex:1594; appliedIndex:1594; }","duration":"132.041722ms","start":"2021-09-15T01:26:34.570Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[1181682201] 'read index received' (duration: 132.036075ms)","trace[1181682201] 'applied index is now lower than readState.Index' (duration: 4.476µs)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"143.435067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/clusterserviceversions/my-etcd/cluster-manager.v0.4.0\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[2079300295] range","detail":"{range_begin:/registry/operators.coreos.com/clusterserviceversions/my-etcd/cluster-manager.v0.4.0; range_end:; response_count:0; response_revision:1515; }","duration":"143.480457ms","start":"2021-09-15T01:26:34.558Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[2079300295] 'agreement among raft nodes before linearized reading' (duration: 143.423086ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"391.856044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
{"level":"info","ts":"2021-09-15T01:26:34.702Z","caller":"traceutil/trace.go:171","msg":"trace[1835690977] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1515; }","duration":"391.891432ms","start":"2021-09-15T01:26:34.310Z","end":"2021-09-15T01:26:34.702Z","steps":["trace[1835690977] 'agreement among raft nodes before linearized reading' (duration: 391.831591ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T01:26:34.702Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T01:26:34.310Z","time spent":"391.9473ms","remote":"127.0.0.1:41036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1152,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
*
* ==> kernel <==
* 01:33:19 up 16 min, 0 users, load average: 0.32, 2.32, 1.95
Linux addons-20210915012342-6768 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [a591528a3fbd] <==
* W0915 01:24:56.820200 1 handler_proxy.go:104] no RequestInfo found in the context
E0915 01:24:56.820255 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0915 01:24:56.820265 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0915 01:24:57.329448 1 controller.go:141] slow openapi aggregation of "operatorgroups.operators.coreos.com": 1.015152211s
I0915 01:24:57.730283 1 controller.go:611] quota admission added evaluator for: operatorgroups.operators.coreos.com
I0915 01:24:58.512188 1 controller.go:611] quota admission added evaluator for: clusterserviceversions.operators.coreos.com
I0915 01:24:58.708908 1 controller.go:611] quota admission added evaluator for: catalogsources.operators.coreos.com
E0915 01:25:09.737880 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.244.62:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.244.62:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0915 01:25:26.811742 1 controller.go:611] quota admission added evaluator for: operatorconditions.operators.coreos.com
W0915 01:25:29.511486 1 handler_proxy.go:104] no RequestInfo found in the context
E0915 01:25:29.511539 1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0915 01:25:29.511551 1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
E0915 01:25:44.015954 1 available_controller.go:524] v1.packages.operators.coreos.com failed with: failing or missing response from https://10.111.224.203:5443/apis/packages.operators.coreos.com/v1: Get "https://10.111.224.203:5443/apis/packages.operators.coreos.com/v1": context deadline exceeded
I0915 01:25:57.065857 1 trace.go:205] Trace[1724058382]: "List etcd3" key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: (15-Sep-2021 01:25:56.320) (total time: 745ms):
Trace[1724058382]: [745.326531ms] [745.326531ms] END
I0915 01:25:57.066512 1 trace.go:205] Trace[950488762]: "List" url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:8da4f27d-db05-44bb-ac47-fd01fd39d0ac,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (15-Sep-2021 01:25:56.320) (total time: 746ms):
Trace[950488762]: ---"Listing from storage done" 745ms (01:25:57.065)
Trace[950488762]: [746.016629ms] [746.016629ms] END
I0915 01:26:28.361555 1 controller.go:611] quota admission added evaluator for: subscriptions.operators.coreos.com
I0915 01:26:28.959258 1 controller.go:611] quota admission added evaluator for: installplans.operators.coreos.com
I0915 01:26:55.546095 1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
I0915 01:27:13.012438 1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0915 01:27:15.163184 1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
*
* ==> kube-controller-manager [e3b129b7bcd1] <==
* I0915 01:26:37.714735 1 event.go:291] "Event occurred" object="default/private-image" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-7ff9c8c74f to 1"
I0915 01:26:37.717943 1 event.go:291] "Event occurred" object="default/private-image-7ff9c8c74f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-7ff9c8c74f-zr6nw"
I0915 01:26:38.013235 1 event.go:291] "Event occurred" object="my-etcd/cluster-manager" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cluster-manager-794c6cc889 to 3"
I0915 01:26:38.035505 1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-ldv66"
I0915 01:26:38.122535 1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-4lwmf"
I0915 01:26:38.122571 1 event.go:291] "Event occurred" object="my-etcd/cluster-manager-794c6cc889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cluster-manager-794c6cc889-x97lm"
I0915 01:26:43.310431 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0915 01:26:43.411414 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 01:26:53.136572 1 event.go:291] "Event occurred" object="default/private-image-eu" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-eu-5956d58f9f to 1"
I0915 01:26:53.216808 1 event.go:291] "Event occurred" object="default/private-image-eu-5956d58f9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-eu-5956d58f9f-s4zkt"
I0915 01:26:57.445749 1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
I0915 01:26:59.021896 1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768"
I0915 01:26:59.671466 1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768"
I0915 01:26:59.671610 1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e\" "
I0915 01:27:06.923132 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I0915 01:27:06.926944 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
E0915 01:27:11.719005 1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-mr4hf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
I0915 01:27:17.740313 1 event.go:291] "Event occurred" object="default/hpvc-restore" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
I0915 01:27:18.002687 1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1092d423-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768"
I0915 01:27:18.565122 1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1092d423-15c4-11ec-87b4-0242ac11000d") from node "addons-20210915012342-6768"
I0915 01:27:18.565243 1 event.go:291] "Event occurred" object="default/task-pv-pod-restore" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03\" "
I0915 01:27:21.943567 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768"
I0915 01:27:21.945423 1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768"
I0915 01:27:22.487535 1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3bab756c-4b9b-41dd-9e85-8afedc924a1e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^04799836-15c4-11ec-87b4-0242ac11000d") on node "addons-20210915012342-6768"
I0915 01:27:37.969584 1 namespace_controller.go:185] Namespace has been deleted ingress-nginx
*
* ==> kube-proxy [a51365e4d452] <==
* I0915 01:24:43.262791 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0915 01:24:43.262834 1 server_others.go:140] Detected node IP 192.168.49.2
W0915 01:24:43.262853 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0915 01:24:43.281819 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0915 01:24:43.281911 1 server_others.go:212] Using iptables Proxier.
I0915 01:24:43.281924 1 server_others.go:219] creating dualStackProxier for iptables.
W0915 01:24:43.281937 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0915 01:24:43.282328 1 server.go:649] Version: v1.22.1
I0915 01:24:43.282908 1 config.go:315] Starting service config controller
I0915 01:24:43.282931 1 config.go:224] Starting endpoint slice config controller
I0915 01:24:43.282935 1 shared_informer.go:240] Waiting for caches to sync for service config
I0915 01:24:43.282942 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0915 01:24:43.285195 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"addons-20210915012342-6768.16a4da62e4104718", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04870b6d0dc7294, ext:67064675, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-addons-20210915012342-6768", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"addons-20210915012342-6768", UID:"addons-20210915012342-6768", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "addons-20210915012342-6768.16a4da62e4104718" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 01:24:43.383971 1 shared_informer.go:247] Caches are synced for service config
I0915 01:24:43.383990 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [f79b2fc97e02] <==
* E0915 01:24:27.025243 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 01:24:27.027835 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 01:24:27.027859 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:27.027946 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 01:24:27.028062 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 01:24:27.028066 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 01:24:27.028123 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 01:24:27.028198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 01:24:27.028225 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 01:24:27.028300 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:27.028319 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 01:24:27.028399 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 01:24:27.028398 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:27.028478 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 01:24:27.028482 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:27.918709 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:28.075731 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 01:24:28.129112 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 01:24:28.158174 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 01:24:28.189443 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 01:24:30.630161 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 01:24:30.631839 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 01:24:30.631876 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 01:24:30.698551 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0915 01:24:30.824497 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-09-15 01:24:10 UTC, end at Wed 2021-09-15 01:33:19 UTC. --
Sep 15 01:29:25 addons-20210915012342-6768 kubelet[2283]: E0915 01:29:25.877609 2283 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds podName:016da038-7e39-46c3-9e82-2ac44a0118dd nodeName:}" failed. No retries permitted until 2021-09-15 01:31:27.877589675 +0000 UTC m=+417.978605527 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds") pod "task-pv-pod-restore" (UID: "016da038-7e39-46c3-9e82-2ac44a0118dd") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Sep 15 01:29:35 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:35.325529 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:29:38 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:38.325793 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:29:41 addons-20210915012342-6768 kubelet[2283]: I0915 01:29:41.325304 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:22 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:22.327640 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:25 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:25.325134 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:41 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:41.325725 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:43 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:43.324939 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:43 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:43.325000 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-4lwmf" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:30:47 addons-20210915012342-6768 kubelet[2283]: I0915 01:30:47.325164 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:31:00 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:00.325951 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:31:27 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:27.923051 2283 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds podName:016da038-7e39-46c3-9e82-2ac44a0118dd nodeName:}" failed. No retries permitted until 2021-09-15 01:33:29.92302634 +0000 UTC m=+540.024042198 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/016da038-7e39-46c3-9e82-2ac44a0118dd-gcp-creds") pod "task-pv-pod-restore" (UID: "016da038-7e39-46c3-9e82-2ac44a0118dd") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Sep 15 01:31:36 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:36.326171 2283 kubelet.go:1720] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore"
Sep 15 01:31:36 addons-20210915012342-6768 kubelet[2283]: E0915 01:31:36.326226 2283 pod_workers.go:747] "Error syncing pod, skipping" err="unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition" pod="default/task-pv-pod-restore" podUID=016da038-7e39-46c3-9e82-2ac44a0118dd
Sep 15 01:31:45 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:45.325418 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:31:48 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:48.326052 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:31:49 addons-20210915012342-6768 kubelet[2283]: I0915 01:31:49.325086 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:06 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:06.325170 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:06 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:06.325263 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:12 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:12.325039 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-4lwmf" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:17 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:17.325897 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-s4zkt" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:53 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:53.325941 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-x97lm" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:32:58 addons-20210915012342-6768 kubelet[2283]: I0915 01:32:58.325654 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:33:14 addons-20210915012342-6768 kubelet[2283]: I0915 01:33:14.325869 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-ldv66" secret="" err="secret \"gcp-auth\" not found"
Sep 15 01:33:17 addons-20210915012342-6768 kubelet[2283]: I0915 01:33:17.324974 2283 kubelet_pods.go:893] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-zr6nw" secret="" err="secret \"gcp-auth\" not found"
*
* ==> storage-provisioner [8d912400cc21] <==
* I0915 01:24:56.011202 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 01:24:56.111327 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 01:24:56.113121 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 01:24:56.219201 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 01:24:56.219579 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0!
I0915 01:24:56.229644 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f072d67-d6f1-4e15-bf6a-802d085768f6", APIVersion:"v1", ResourceVersion:"859", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0 became leader
I0915 01:24:56.521463 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210915012342-6768_004bc0cb-d41e-40f2-ba44-6e9643158fa0!
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210915012342-6768 -n addons-20210915012342-6768
helpers_test.go:262: (dbg) Run: kubectl --context addons-20210915012342-6768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt: exit status 1 (67.309579ms)
-- stdout --
Name: task-pv-pod-restore
Namespace: default
Priority: 0
Node: addons-20210915012342-6768/192.168.49.2
Start Time: Wed, 15 Sep 2021 01:27:17 +0000
Labels: app=task-pv-pod-restore
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
task-pv-container:
Container ID:
Image: nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: k8s-minikube
GCP_PROJECT: k8s-minikube
GCLOUD_PROJECT: k8s-minikube
GOOGLE_CLOUD_PROJECT: k8s-minikube
CLOUDSDK_CORE_PROJECT: k8s-minikube
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rw7gr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc-restore
ReadOnly: false
kube-api-access-rw7gr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m3s default-scheduler Successfully assigned default/task-pv-pod-restore to addons-20210915012342-6768
Normal SuccessfulAttachVolume 6m2s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-8c72ce18-f74a-4b33-a522-04c88edb4a03"
Warning FailedMount 113s (x10 over 6m2s) kubelet MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Warning FailedMount 104s (x2 over 4m) kubelet Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-rw7gr gcp-creds task-pv-storage]: timed out waiting for the condition
-- /stdout --
** stderr **
Error from server (NotFound): pods "gcp-auth-certs-create--1-ndlhf" not found
Error from server (NotFound): pods "gcp-auth-certs-patch--1-krrln" not found
Error from server (NotFound): pods "4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt" not found
** /stderr **
helpers_test.go:278: kubectl --context addons-20210915012342-6768 describe pod task-pv-pod-restore gcp-auth-certs-create--1-ndlhf gcp-auth-certs-patch--1-krrln 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-zvqvt: exit status 1
--- FAIL: TestAddons/parallel/CSI (383.27s)
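Post-mortem note (analysis, not part of the captured output): the CSI path itself succeeds in this run. The restored claim hpvc-restore binds and the attachdetach-controller reports SuccessfulAttachVolume, but task-pv-pod-restore never leaves ContainerCreating because kubelet cannot mount the gcp-creds volume: "hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file". The GOOGLE_APPLICATION_CREDENTIALS environment variables and the gcp-auth-certs-* jobs in the describe output suggest this volume is injected by the gcp-auth addon's webhook. A minimal sketch of the volume shape implied by the describe output (reconstructed here for illustration, not the addon's actual template):

    volumes:
    - name: gcp-creds
      hostPath:
        # With type: File, kubelet refuses the mount unless this path
        # already exists on the node and is a regular file.
        path: /var/lib/minikube/google_application_credentials.json
        type: File

Because the credentials file was never written to the node in this run, every MountVolume.SetUp attempt fails, kubelet backs off (durationBeforeRetry 2m2s in the kubelet log above), and the pod exceeds the 6m0s wait at addons_test.go:544, producing the failure reported here.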