=== RUN TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20210915191040-306161 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210915191040-306161 --alsologtostderr -v=5: exit status 80 (4.610152748s)
-- stdout --
* Pausing node pause-20210915191040-306161 ...
-- /stdout --
** stderr **
I0915 19:11:55.861377 483809 out.go:298] Setting OutFile to fd 1 ...
I0915 19:11:55.861590 483809 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:11:55.861599 483809 out.go:311] Setting ErrFile to fd 2...
I0915 19:11:55.861604 483809 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:11:55.861708 483809 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 19:11:55.861881 483809 out.go:305] Setting JSON to false
I0915 19:11:55.861904 483809 mustload.go:65] Loading cluster: pause-20210915191040-306161
I0915 19:11:55.862178 483809 config.go:177] Loaded profile config "pause-20210915191040-306161": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 19:11:55.862574 483809 cli_runner.go:115] Run: docker container inspect pause-20210915191040-306161 --format={{.State.Status}}
I0915 19:11:55.903790 483809 host.go:66] Checking if "pause-20210915191040-306161" exists ...
I0915 19:11:55.904145 483809 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 19:11:55.996980 483809 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:199 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:59 SystemTime:2021-09-15 19:11:55.9467662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742192640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 19:11:55.997482 483809 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12425/minikube-v1.23.0-1631662909-12425.iso https://github.com/kubernetes/minikube/releases/download/v1.23.0-1631662909-12425/minikube-v1.23.0-1631662909-12425.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.23.0-1631662909-12425.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210915191040-306161 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
I0915 19:11:57.382484 483809 out.go:177] * Pausing node pause-20210915191040-306161 ...
I0915 19:11:57.382527 483809 host.go:66] Checking if "pause-20210915191040-306161" exists ...
I0915 19:11:57.382891 483809 ssh_runner.go:152] Run: systemctl --version
I0915 19:11:57.382947 483809 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915191040-306161
I0915 19:11:57.433562 483809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915191040-306161/id_rsa Username:docker}
I0915 19:11:57.513268 483809 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 19:11:57.522477 483809 pause.go:50] kubelet running: true
I0915 19:11:57.522552 483809 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
I0915 19:11:57.612799 483809 ssh_runner.go:152] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
I0915 19:11:57.649114 483809 docker.go:407] Pausing containers: [bba215ebb412 34c35294796d 3dcc70b67afa d15f2b587bbf 8dd7be01f740 74da63129b6f a5f849b43b1c b2b6bdeb15b9 27693cb21ee8 05fa6730f481 cd525100a832 5da3019bf389 2b95e975790b ce786a1a3e58]
I0915 19:11:57.649224 483809 ssh_runner.go:152] Run: docker pause bba215ebb412 34c35294796d 3dcc70b67afa d15f2b587bbf 8dd7be01f740 74da63129b6f a5f849b43b1c b2b6bdeb15b9 27693cb21ee8 05fa6730f481 cd525100a832 5da3019bf389 2b95e975790b ce786a1a3e58
I0915 19:12:00.395663 483809 ssh_runner.go:192] Completed: docker pause bba215ebb412 34c35294796d 3dcc70b67afa d15f2b587bbf 8dd7be01f740 74da63129b6f a5f849b43b1c b2b6bdeb15b9 27693cb21ee8 05fa6730f481 cd525100a832 5da3019bf389 2b95e975790b ce786a1a3e58: (2.746407978s)
I0915 19:12:00.398966 483809 out.go:177]
W0915 19:12:00.399171 483809 out.go:242] X Exiting due to GUEST_PAUSE: docker: docker pause bba215ebb412 34c35294796d 3dcc70b67afa d15f2b587bbf 8dd7be01f740 74da63129b6f a5f849b43b1c b2b6bdeb15b9 27693cb21ee8 05fa6730f481 cd525100a832 5da3019bf389 2b95e975790b ce786a1a3e58: Process exited with status 1
stdout:
bba215ebb412
34c35294796d
3dcc70b67afa
d15f2b587bbf
8dd7be01f740
74da63129b6f
a5f849b43b1c
b2b6bdeb15b9
27693cb21ee8
cd525100a832
5da3019bf389
2b95e975790b
ce786a1a3e58
stderr:
Error response from daemon: Cannot pause container 05fa6730f48169d2ac729d988cd9323576ee6eec8a609c586992b637f8c4d022: OCI runtime pause failed: unable to freeze: unknown
W0915 19:12:00.399190 483809 out.go:242] *
W0915 19:12:00.406063 483809 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0915 19:12:00.407611 483809 out.go:177]
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210915191040-306161 --alsologtostderr -v=5" : exit status 80
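Note that the failure is isolated to one container: `docker pause` froze 13 of the 14 targets and errored only on 05fa6730f481 ("OCI runtime pause failed: unable to freeze: unknown"). A hands-on way to narrow this down is to check that container's freezer state from inside the node, e.g. via `out/minikube-linux-amd64 ssh -p pause-20210915191040-306161`. The commands below are a diagnostic sketch, assuming cgroup v1 with the cgroupfs driver (consistent with the 4.9.0 kernel and CgroupDriver:cgroupfs in the docker info above), not part of the test harness:

    # state of the one container that refused to pause
    docker inspect --format '{{.State.Status}} paused={{.State.Paused}}' 05fa6730f481
    # freezer cgroup state: THAWED, FREEZING (stuck mid-freeze), or FROZEN
    cat /sys/fs/cgroup/freezer/docker/05fa6730f481*/freezer.state
    # retrying only the stuck container separates it from the batch
    docker pause 05fa6730f481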
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect pause-20210915191040-306161
helpers_test.go:236: (dbg) docker inspect pause-20210915191040-306161:
-- stdout --
[
{
"Id": "bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061",
"Created": "2021-09-15T19:10:42.551354157Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 459889,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-09-15T19:10:43.169524869Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
"ResolvConfPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/hostname",
"HostsPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/hosts",
"LogPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061-json.log",
"Name": "/pause-20210915191040-306161",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20210915191040-306161:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20210915191040-306161",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e-init/diff:/var/lib/docker/overlay2/42af7561be1bbb04e60d5898b29ceee8cccacce1c1e5ac5a5471bd2c2eb199c1/diff:/var/lib/docker/overlay2/751fc28b4872d1d49285c85a1cc36b03020834feb891b324f4934fc7bbb2a6ad/diff:/var/lib/docker/overlay2/0ecc58bf735cb0b519ef22a62daf0e530d708564e0b5e2a32c0d2dfbf82c31f1/diff:/var/lib/docker/overlay2/40f6c03795cd703755126d403a243ddfa2b5a61c04ba64a54188be2d675512f0/diff:/var/lib/docker/overlay2/c1ed82aa36dccda427851e6e631dda57f4ea197ec647ed726ad163b6ab37743f/diff:/var/lib/docker/overlay2/80d741d1e18c6360d99304165056959acb58e58f7c7d3b65d8d53565555b4030/diff:/var/lib/docker/overlay2/447eff9d25dbd151310c8d31d34423f404bc92869252861a8515d52575670208/diff:/var/lib/docker/overlay2/c0772139dfb4023aacb541a406a5e5d1697bd07fb74faf19f0f70c0378a6a0aa/diff:/var/lib/docker/overlay2/b6bb35ea8704143c911f2f148b8a83fb1595e56bdb785ad83f7f5b0ba255050b/diff:/var/lib/docker/overlay2/77cfc0
f9841fd241edda977799472f43ff62c670f7f0dc94936630ed52e1b91e/diff:/var/lib/docker/overlay2/47450527f7ce6a152d348dbe5dac4026cc11f3f715cf7e7b7ea7e797df3f5cf4/diff:/var/lib/docker/overlay2/5ea3e2569f810895df6596566930f3068e5c5d42831d6bda8de83d19c1185d3f/diff:/var/lib/docker/overlay2/d116c54fdfbaac2c30c98b1317c22640c46f07f3f9924da3e450e7cc3f026f3b/diff:/var/lib/docker/overlay2/4f1d2222e5e679ccc39dcf8be0e0caad5028cf91c0cc32b4025b79da0530cf79/diff:/var/lib/docker/overlay2/d395fbc24d899f00e3584b6298d4b2d46f9c834636fa54bf82aec3d8bf7aa1e4/diff:/var/lib/docker/overlay2/a294a18c3929a26e58691a4f489d1d4ed51d12a1c252dc68fdee0b9bddededd1/diff:/var/lib/docker/overlay2/8bba47552514f0de9059dcaf0fac98e702505d70e78a52d0394a82f5fc85fa37/diff:/var/lib/docker/overlay2/020c4987317db46a4f9cdc01cf984f014ead8756b3e1c340fb8f181a80339c1c/diff:/var/lib/docker/overlay2/a9def08cecacaa841f943f37f2b4027d2b8c9c6b2a0e55d4844d2c257eddd0e6/diff:/var/lib/docker/overlay2/17ef9dd7c4c56b129b7bcdf2629184e963776cba2a5484bd18ca3102b68d02cc/diff:/var/lib/d
ocker/overlay2/8d91d9e3464303a35c2d8a3a3f93f0ab30cd9fe6fa28c942ddc92769cf590ac9/diff:/var/lib/docker/overlay2/9830b73f3f06b9fc4f470e3b1dc16b8a3299272056c5131a298e6bb42a083941/diff:/var/lib/docker/overlay2/ea9d596d994ea5dfd5a26427720cb2e8361d797aa279267428df94c19a9d7c44/diff:/var/lib/docker/overlay2/2e1466e09da2855871ebe76fbcffcb15fd134652525feb0105fe152a084d210e/diff:/var/lib/docker/overlay2/9bdc54e7df3f3e5c2597a98b2737c550e37e1f78d16859fb6518d8b440ca0b08/diff:/var/lib/docker/overlay2/5037f238f4a3fe4dab6cd163114a4e1ff421eac25c08e1692759c11df7b3ee16/diff:/var/lib/docker/overlay2/873666dc974cc73345c5bdf7408c32b17137d25cc199834ebd8660d05c9481e2/diff:/var/lib/docker/overlay2/139b02d97d4069f11d6bc3503baa4785c0a78b8bd7bc9cb27acaf894f3107aa9/diff:/var/lib/docker/overlay2/6e6fd6abdb501264791d7c049ecc9089ab221a7aa6f4cbb745874fefd93cf871/diff:/var/lib/docker/overlay2/11575ed178ccca3cb09dfeb7628b08e11e6a3357f3659c2890a288ce9f2ba782/diff:/var/lib/docker/overlay2/bb3435980235e05ccce2d74ef847a0acd46c4edf0d38983388b271aa9dd
b6053/diff:/var/lib/docker/overlay2/0e3e4464369c9a27193d7ad87e67c5485363aeabca8fcbc6855eab5789e4c2a5/diff:/var/lib/docker/overlay2/41e71fdf4be3041a17553878254d63ea1c8c4063994bf2725203012ee9dd93c5/diff:/var/lib/docker/overlay2/76ee62595a4ab520c3233836c68c0bc0297b3907fff6db1b659f16f4325b8d3c/diff:/var/lib/docker/overlay2/dc211f5ed531b1debbcdd38dd7129b846d5e959af10f3144da2541b5c6f0233b/diff:/var/lib/docker/overlay2/6822a79442f86fb63c1067f542f9f6ce9288d3cec3ed6330fce4a6e3f48db77e/diff:/var/lib/docker/overlay2/757fc4ce96317e0c241e2344c927a20f4844398da422284baf7de7aebb6f828f/diff:/var/lib/docker/overlay2/3672ef17e397e206ae409a53d2bca46c162142ff01c54c787ba59f0bf2dad58a/diff:/var/lib/docker/overlay2/e04f51cc901a2cc97f893a3d4c6ebb453df442a6d57974d1b702d6ad1efab159/diff",
"MergedDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/merged",
"UpperDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/diff",
"WorkDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20210915191040-306161",
"Source": "/var/lib/docker/volumes/pause-20210915191040-306161/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20210915191040-306161",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20210915191040-306161",
"name.minikube.sigs.k8s.io": "pause-20210915191040-306161",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "df07212579862d4f5ae99ffb739ae3c99da6a0412ff03dd0d6816a60189e779d",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33118"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33117"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33112"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33116"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33114"
}
]
},
"SandboxKey": "/var/run/docker/netns/df0721257986",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20210915191040-306161": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"bca902ad6107"
],
"NetworkID": "515ea92b2bf138022bfb99d0055b7e5e76b1d54cb7cfb53912aad56a462f4219",
"EndpointID": "a897f77a4ed3bef902f3cd13240fca7c0c0e799541ca81af9ad32dd3573eec4d",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
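The inspect output shows the node container itself was never paused at the outer Docker level ("Status": "running", "Paused": false); only the inner Kubernetes containers, running on the dockerd inside the kicbase node, were partially frozen. The same Go-template syntax the harness already uses can confirm this in one line (a suggested check, not part of the test output):

    docker inspect --format '{{.State.Paused}}' pause-20210915191040-306161   # prints: false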
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915191040-306161 -n pause-20210915191040-306161
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915191040-306161 -n pause-20210915191040-306161: exit status 2 (641.906414ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
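The non-zero status is consistent with the half-completed pause: the host is Running, but the pause path had already run `sudo systemctl disable --now kubelet` (19:11:57 above), so not every component reports Running and `status` exits non-zero. Re-running status without the `--format={{.Host}}` filter would show the per-component breakdown (a suggested follow-up, not part of the harness):

    out/minikube-linux-amd64 status -p pause-20210915191040-306161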
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210915191040-306161 logs -n 25
=== CONT TestPause/serial/PauseAgain
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210915191040-306161 logs -n 25: exit status 110 (13.276454511s)
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
| delete | -p | multinode-20210915185040-306161-m03 | jenkins | v1.23.0 | Wed, 15 Sep 2021 18:56:29 UTC | Wed, 15 Sep 2021 18:56:32 UTC |
| | multinode-20210915185040-306161-m03 | | | | | |
| delete | -p | multinode-20210915185040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 18:56:32 UTC | Wed, 15 Sep 2021 18:56:37 UTC |
| | multinode-20210915185040-306161 | | | | | |
| start | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:06:19 UTC | Wed, 15 Sep 2021 19:07:32 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:07:32 UTC | Wed, 15 Sep 2021 19:07:35 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | -- docker pull busybox | | | | | |
| start | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:07:35 UTC | Wed, 15 Sep 2021 19:08:08 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:08 UTC | Wed, 15 Sep 2021 19:08:09 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | -- docker images | | | | | |
| delete | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:09 UTC | Wed, 15 Sep 2021 19:08:14 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| start | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:14 UTC | Wed, 15 Sep 2021 19:08:36 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --memory=2048 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| stop | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:36 UTC | Wed, 15 Sep 2021 19:08:36 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:54 UTC | Wed, 15 Sep 2021 19:09:10 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --schedule 5s | | | | | |
| delete | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:09:19 UTC | Wed, 15 Sep 2021 19:09:21 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| start | -p | skaffold-20210915190921-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:09:22 UTC | Wed, 15 Sep 2021 19:09:43 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| | --memory=2600 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| docker-env | --shell none -p | skaffold-20210915190921-306161 | skaffold | v1.23.0 | Wed, 15 Sep 2021 19:09:44 UTC | Wed, 15 Sep 2021 19:09:44 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p | skaffold-20210915190921-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:27 UTC | Wed, 15 Sep 2021 19:10:30 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| delete | -p | insufficient-storage-20210915191030-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:38 UTC | Wed, 15 Sep 2021 19:10:40 UTC |
| | insufficient-storage-20210915191030-306161 | | | | | |
| delete | -p | flannel-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:10:40 UTC |
| | flannel-20210915191040-306161 | | | | | |
| start | -p | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:27 UTC |
| | force-systemd-flag-20210915191040-306161 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| -p | force-systemd-flag-20210915191040-306161 | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:27 UTC | Wed, 15 Sep 2021 19:11:28 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:28 UTC | Wed, 15 Sep 2021 19:11:30 UTC |
| | force-systemd-flag-20210915191040-306161 | | | | | |
| start | -p | offline-docker-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:42 UTC |
| | offline-docker-20210915191040-306161 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:42 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | offline-docker-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:42 UTC | Wed, 15 Sep 2021 19:11:50 UTC |
| | offline-docker-20210915191040-306161 | | | | | |
| start | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:43 UTC | Wed, 15 Sep 2021 19:11:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| pause | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:52 UTC | Wed, 15 Sep 2021 19:11:52 UTC |
| | --alsologtostderr -v=5 | | | | | |
| unpause | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:53 UTC | Wed, 15 Sep 2021 19:11:55 UTC |
| | --alsologtostderr -v=5 | | | | | |
|------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 19:11:52
Running on machine: debian-jenkins-agent-3
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 19:11:52.646190 482400 out.go:298] Setting OutFile to fd 1 ...
I0915 19:11:52.646289 482400 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:11:52.646298 482400 out.go:311] Setting ErrFile to fd 2...
I0915 19:11:52.646304 482400 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:11:52.646451 482400 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 19:11:52.646723 482400 out.go:305] Setting JSON to false
I0915 19:11:52.709402 482400 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":17490,"bootTime":1631715623,"procs":261,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 19:11:52.709580 482400 start.go:121] virtualization: kvm guest
I0915 19:11:52.712706 482400 out.go:177] * [stopped-upgrade-20210915191040-306161] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 19:11:52.724951 482400 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 19:11:52.712947 482400 notify.go:169] Checking for updates...
I0915 19:11:52.729893 482400 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 19:11:52.732520 482400 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
I0915 19:11:52.736050 482400 out.go:177] - MINIKUBE_LOCATION=12425
I0915 19:11:52.736631 482400 config.go:177] Loaded profile config "stopped-upgrade-20210915191040-306161": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
I0915 19:11:52.736653 482400 start_flags.go:561] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
I0915 19:11:52.742215 482400 out.go:177] * Kubernetes 1.22.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.1
I0915 19:11:52.742288 482400 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 19:11:52.817439 482400 docker.go:132] docker version: linux-19.03.15
I0915 19:11:52.817558 482400 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 19:11:52.947450 482400 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:199 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 19:11:52.876518695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742192640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 19:11:52.947559 482400 docker.go:237] overlay module found
I0915 19:11:52.949261 482400 out.go:177] * Using the docker driver based on existing profile
I0915 19:11:52.949289 482400 start.go:278] selected driver: docker
I0915 19:11:52.949296 482400 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210915191040-306161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210915191040-306161 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 19:11:52.949383 482400 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0915 19:11:52.949423 482400 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0915 19:11:52.949440 482400 out.go:242] ! Your cgroup does not allow setting memory.
I0915 19:11:52.951286 482400 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0915 19:11:52.952197 482400 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 19:11:53.065988 482400 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:199 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 19:11:52.997961998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742192640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
W0915 19:11:53.066165 482400 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0915 19:11:53.066201 482400 out.go:242] ! Your cgroup does not allow setting memory.
I0915 19:11:53.068504 482400 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0915 19:11:53.068615 482400 cni.go:93] Creating CNI manager for ""
I0915 19:11:53.068639 482400 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 19:11:53.068650 482400 start_flags.go:278] config:
{Name:stopped-upgrade-20210915191040-306161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210915191040-306161 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 19:11:53.070689 482400 out.go:177] * Starting control plane node stopped-upgrade-20210915191040-306161 in cluster stopped-upgrade-20210915191040-306161
I0915 19:11:53.070719 482400 cache.go:118] Beginning downloading kic base image for docker with docker
I0915 19:11:53.072376 482400 out.go:177] * Pulling base image ...
I0915 19:11:53.072419 482400 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0915 19:11:53.072526 482400 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
W0915 19:11:53.144041 482400 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0915 19:11:53.144225 482400 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/stopped-upgrade-20210915191040-306161/config.json ...
I0915 19:11:53.144592 482400 cache.go:108] acquiring lock: {Name:mk6b9e7d3773167a64e148aca319303ee031f9ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.144748 482400 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
I0915 19:11:53.144767 482400 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 184.439µs
I0915 19:11:53.144781 482400 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
I0915 19:11:53.144760 482400 cache.go:108] acquiring lock: {Name:mk585c58d0101944c482cec7adaff8d8f7ec59d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.144796 482400 cache.go:108] acquiring lock: {Name:mk403ba4c0fcf40250a1e4cabb5f46b09336815f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.144827 482400 cache.go:108] acquiring lock: {Name:mkeae6c789ec45d617b2ccf67c0c9f46df3527b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.144889 482400 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
I0915 19:11:53.144906 482400 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 112.731µs
I0915 19:11:53.144951 482400 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0915 19:11:53.144973 482400 image.go:135] retrieving image: k8s.gcr.io/coredns:1.6.7
I0915 19:11:53.144969 482400 cache.go:108] acquiring lock: {Name:mk59f9a64799ea42370cf9937feecb10f3c510bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145077 482400 image.go:135] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
I0915 19:11:53.145131 482400 cache.go:108] acquiring lock: {Name:mk86067ad82c4611ff746f3e055c785e01961386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145184 482400 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0915 19:11:53.145198 482400 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.724µs
I0915 19:11:53.145209 482400 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0915 19:11:53.145225 482400 cache.go:108] acquiring lock: {Name:mk6305aaf40bbdef6a3d1d54eebacee91a81701a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145270 482400 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
I0915 19:11:53.145261 482400 cache.go:108] acquiring lock: {Name:mk4da25d12084ed57de36e92e2221428939fc275 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145280 482400 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 57.293µs
I0915 19:11:53.145290 482400 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
I0915 19:11:53.145377 482400 image.go:135] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
I0915 19:11:53.145419 482400 image.go:135] retrieving image: k8s.gcr.io/pause:3.2
I0915 19:11:53.145498 482400 cache.go:108] acquiring lock: {Name:mkbc6a4dde5b13a178e2ded1914cdd64fd4b3072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145573 482400 image.go:135] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
I0915 19:11:53.145640 482400 cache.go:108] acquiring lock: {Name:mk515309a95102e653769009e406a4e6cf62bd44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.145714 482400 image.go:135] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
I0915 19:11:53.146525 482400 image.go:181] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
I0915 19:11:53.146596 482400 image.go:181] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
I0915 19:11:53.146599 482400 image.go:181] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
I0915 19:11:53.146817 482400 image.go:181] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
I0915 19:11:53.146879 482400 image.go:181] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
I0915 19:11:53.156270 482400 image.go:177] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc00068a360 lock:{state:0 sema:0} manifest:<nil>}
I0915 19:11:53.156310 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0915 19:11:53.201237 482400 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
I0915 19:11:53.201272 482400 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
I0915 19:11:53.201285 482400 cache.go:206] Successfully downloaded all kic artifacts
I0915 19:11:53.201324 482400 start.go:313] acquiring machines lock for stopped-upgrade-20210915191040-306161: {Name:mk770135214cd09dfecc39b319fa3976c796555c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:11:53.201452 482400 start.go:317] acquired machines lock for "stopped-upgrade-20210915191040-306161" in 108.064µs
I0915 19:11:53.201473 482400 start.go:93] Skipping create...Using existing machine configuration
I0915 19:11:53.201495 482400 fix.go:55] fixHost starting: m01
I0915 19:11:53.201785 482400 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210915191040-306161 --format={{.State.Status}}
I0915 19:11:53.211534 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0915 19:11:53.211588 482400 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 66.845869ms
I0915 19:11:53.211607 482400 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0915 19:11:53.253809 482400 fix.go:108] recreateIfNeeded on stopped-upgrade-20210915191040-306161: state=Stopped err=<nil>
W0915 19:11:53.253847 482400 fix.go:134] unexpected machine state, will restart: <nil>
I0915 19:11:49.488824 474180 out.go:204] - Booting up control plane ...
I0915 19:11:53.257125 482400 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210915191040-306161" ...
I0915 19:11:53.257211 482400 cli_runner.go:115] Run: docker start stopped-upgrade-20210915191040-306161
I0915 19:11:53.496605 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0
I0915 19:11:53.500560 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
I0915 19:11:53.535700 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0
I0915 19:11:53.536284 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0
I0915 19:11:53.557757 482400 cache.go:162] opening: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0
I0915 19:11:54.128303 482400 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210915191040-306161 --format={{.State.Status}}
I0915 19:11:54.130639 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
I0915 19:11:54.130690 482400 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 985.862151ms
I0915 19:11:54.130710 482400 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
I0915 19:11:54.203935 482400 kic.go:420] container "stopped-upgrade-20210915191040-306161" state is running.
I0915 19:11:54.204348 482400 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210915191040-306161
I0915 19:11:54.231636 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
I0915 19:11:54.231688 482400 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 1.086191074s
I0915 19:11:54.231705 482400 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
I0915 19:11:54.259842 482400 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/stopped-upgrade-20210915191040-306161/config.json ...
I0915 19:11:54.260071 482400 machine.go:88] provisioning docker machine ...
I0915 19:11:54.260096 482400 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210915191040-306161"
I0915 19:11:54.260150 482400 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210915191040-306161
I0915 19:11:54.325806 482400 main.go:130] libmachine: Using SSH client type: native
I0915 19:11:54.326024 482400 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 127.0.0.1 33139 <nil> <nil>}
I0915 19:11:54.326047 482400 main.go:130] libmachine: About to run SSH command:
sudo hostname stopped-upgrade-20210915191040-306161 && echo "stopped-upgrade-20210915191040-306161" | sudo tee /etc/hostname
I0915 19:11:54.326604 482400 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42638->127.0.0.1:33139: read: connection reset by peer
I0915 19:11:54.552996 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
I0915 19:11:54.553044 482400 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 1.408077067s
I0915 19:11:54.553063 482400 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
I0915 19:11:54.595267 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
I0915 19:11:54.595323 482400 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 1.450067048s
I0915 19:11:54.595341 482400 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
I0915 19:11:55.264209 482400 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
I0915 19:11:55.264257 482400 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 2.118618441s
I0915 19:11:55.264270 482400 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
I0915 19:11:55.264296 482400 cache.go:88] Successfully saved all images to host disk.
I0915 19:11:57.328827 482400 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42718->127.0.0.1:33139: read: connection reset by peer
*
* ==> Docker <==
* -- Logs begin at Wed 2021-09-15 19:10:43 UTC, end at Wed 2021-09-15 19:12:01 UTC. --
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.748712844Z" level=info msg="Starting up"
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750726725Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750759815Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750780766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750789481Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752119284Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752145616Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752162703Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752175184Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.785563417Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.554871470Z" level=warning msg="Your kernel does not support swap memory limit"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.554906136Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.555074626Z" level=info msg="Loading containers: start."
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.683507911Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.723475583Z" level=info msg="Loading containers: done."
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.769826862Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.769898162Z" level=info msg="Daemon has completed initialization"
Sep 15 19:11:09 pause-20210915191040-306161 systemd[1]: Started Docker Application Container Engine.
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.795475313Z" level=info msg="API listen on [::]:2376"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.801062171Z" level=info msg="API listen on /var/run/docker.sock"
Sep 15 19:11:39 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:39.846858888Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Sep 15 19:11:39 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:39.912853399Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Sep 15 19:11:46 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:46.010009521Z" level=info msg="ignoring event" container=911ed5b73e6164b00cf21e7a002849689816a87dee31cfd456db3785edf19049 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 19:11:49 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:49.461607020Z" level=info msg="ignoring event" container=6b92c509ec02896125a066229c0d5f6d8bfe46bc786dae6c3721a2946143e297 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 19:11:57 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:57.959519722Z" level=error msg="Handler for POST /v1.41/containers/05fa6730f481/pause returned error: Cannot pause container 05fa6730f48169d2ac729d988cd9323576ee6eec8a609c586992b637f8c4d022: OCI runtime pause failed: unable to freeze: unknown"
*
* ==> container status <==
* time="2021-09-15T19:12:03Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                      PORTS   NAMES
bba215ebb412   6e38f40d628d           "/storage-provisioner"   12 seconds ago   Up 12 seconds (Paused)              k8s_storage-provisioner_storage-provisioner_kube-system_6ec6737e-98cf-42bd-a911-9c89db377921_0
34c35294796d   k8s.gcr.io/pause:3.5   "/pause"                 12 seconds ago   Up 12 seconds (Paused)              k8s_POD_storage-provisioner_kube-system_6ec6737e-98cf-42bd-a911-9c89db377921_0
3dcc70b67afa   8d147537fb7d           "/coredns -conf /etc…"   24 seconds ago   Up 23 seconds (Paused)              k8s_coredns_coredns-78fcd69978-wgdfl_kube-system_1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83_0
d15f2b587bbf   36c4ebbc9d97           "/usr/local/bin/kube…"   25 seconds ago   Up 24 seconds (Paused)              k8s_kube-proxy_kube-proxy-m4kfv_kube-system_136bb615-a920-412f-b4ce-fdeeb75acfcb_0
6b92c509ec02   k8s.gcr.io/pause:3.5   "/pause"                 25 seconds ago   Exited (0) 14 seconds ago           k8s_POD_coredns-78fcd69978-fvrsb_kube-system_595bf515-bff7-4331-9552-89a5ba57ca6b_0
8dd7be01f740   k8s.gcr.io/pause:3.5   "/pause"                 25 seconds ago   Up 24 seconds (Paused)              k8s_POD_coredns-78fcd69978-wgdfl_kube-system_1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83_0
74da63129b6f   k8s.gcr.io/pause:3.5   "/pause"                 25 seconds ago   Up 25 seconds (Paused)              k8s_POD_kube-proxy-m4kfv_kube-system_136bb615-a920-412f-b4ce-fdeeb75acfcb_0
a5f849b43b1c   6e002eb89a88           "kube-controller-man…"   46 seconds ago   Up 45 seconds (Paused)              k8s_kube-controller-manager_kube-controller-manager-pause-20210915191040-306161_kube-system_7de1da23176b119dfecc95ecb61dd307_0
b2b6bdeb15b9   aca5ededae9c           "kube-scheduler --au…"   46 seconds ago   Up 45 seconds (Paused)              k8s_kube-scheduler_kube-scheduler-pause-20210915191040-306161_kube-system_1a018d46efe067cf543c61ee8873be83_0
27693cb21ee8   f30469a2491a           "kube-apiserver --ad…"   46 seconds ago   Up 45 seconds (Paused)              k8s_kube-apiserver_kube-apiserver-pause-20210915191040-306161_kube-system_b43c2d1987c2753ab3b743454432ec65_0
05fa6730f481   004811815584           "etcd --advertise-cl…"   46 seconds ago   Up 45 seconds                       k8s_etcd_etcd-pause-20210915191040-306161_kube-system_87e207cf7ed458dc963e57311bee91a6_0
cd525100a832   k8s.gcr.io/pause:3.5   "/pause"                 46 seconds ago   Up 46 seconds (Paused)              k8s_POD_kube-scheduler-pause-20210915191040-306161_kube-system_1a018d46efe067cf543c61ee8873be83_0
5da3019bf389   k8s.gcr.io/pause:3.5   "/pause"                 46 seconds ago   Up 46 seconds (Paused)              k8s_POD_kube-controller-manager-pause-20210915191040-306161_kube-system_7de1da23176b119dfecc95ecb61dd307_0
2b95e975790b   k8s.gcr.io/pause:3.5   "/pause"                 46 seconds ago   Up 46 seconds (Paused)              k8s_POD_kube-apiserver-pause-20210915191040-306161_kube-system_b43c2d1987c2753ab3b743454432ec65_0
ce786a1a3e58   k8s.gcr.io/pause:3.5   "/pause"                 46 seconds ago   Up 46 seconds (Paused)              k8s_POD_etcd-pause-20210915191040-306161_kube-system_87e207cf7ed458dc963e57311bee91a6_0
*
* ==> coredns [3dcc70b67afa] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +17.772866] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 18:54] cgroup: cgroup2: unknown option "nsdelegate"
[ +6.702807] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth8705e930
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 12 69 6e a5 f0 15 08 06 .......in.....
[Sep15 18:55] cgroup: cgroup2: unknown option "nsdelegate"
[ +16.873151] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfb23be6d
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff aa c8 da 40 c8 b6 08 06 .........@....
[ +1.199924] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth7db36ce7
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff ce 61 5a cc 64 d4 08 06 .......aZ.d...
[ +13.998812] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 18:56] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethd9ced9de
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 76 3d db ca 7d c7 08 06 ......v=..}...
[ +4.397740] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:06] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:08] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:09] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:10] cgroup: cgroup2: unknown option "nsdelegate"
[ +10.362374] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.511989] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.528368] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:11] cgroup: cgroup2: unknown option "nsdelegate"
[ +26.565621] cgroup: cgroup2: unknown option "nsdelegate"
[ +24.216665] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.000652] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:12] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [05fa6730f481] <==
* {"level":"warn","ts":"2021-09-15T19:11:57.608Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"205.990111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-20210915191040-306161\" ","response":"","error":"context canceled"}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[334900010] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-20210915191040-306161; range_end:; }","duration":"206.246114ms","start":"2021-09-15T19:11:57.402Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[334900010] 'agreement among raft nodes before linearized reading' (duration: 205.986899ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T19:11:57.608Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.587999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-20210915191040-306161\" ","response":"","error":"context canceled"}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[1188001142] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-20210915191040-306161; range_end:; }","duration":"209.88735ms","start":"2021-09-15T19:11:57.399Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[1188001142] 'agreement among raft nodes before linearized reading' (duration: 209.585643ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.609Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"205.22734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20210915191040-306161\" ","response":"","error":"context canceled"}
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[1804048700] range","detail":"{range_begin:/registry/minions/pause-20210915191040-306161; range_end:; }","duration":"205.267382ms","start":"2021-09-15T19:11:57.404Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[1804048700] 'agreement among raft nodes before linearized reading' (duration: 205.22617ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.609Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"211.41271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-m4kfv\" ","response":"","error":"context canceled"}
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[72393194] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-m4kfv; range_end:; }","duration":"211.45198ms","start":"2021-09-15T19:11:57.398Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[72393194] 'agreement among raft nodes before linearized reading' (duration: 211.424329ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.899Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:58.400Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:58.900Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:59.401Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:59.902Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:12:00.385Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"2.975183355s","expected-duration":"1s"}
{"level":"info","ts":"2021-09-15T19:12:00.386Z","caller":"traceutil/trace.go:171","msg":"trace[649689859] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"2.985361187s","start":"2021-09-15T19:11:57.401Z","end":"2021-09-15T19:12:00.386Z","steps":["trace[649689859] 'process raft request' (duration: 2.985215108s)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T19:12:00.386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T19:11:57.401Z","time spent":"2.985466989s","remote":"127.0.0.1:49570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1114,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:491 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1041 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2021-09-15T19:12:00.387Z","caller":"traceutil/trace.go:171","msg":"trace[1681736244] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:515; }","duration":"2.988488142s","start":"2021-09-15T19:11:57.398Z","end":"2021-09-15T19:12:00.386Z","steps":["trace[1681736244] 'read index received' (duration: 2.986831523s)","trace[1681736244] 'applied index is now lower than readState.Index' (duration: 1.655849ms)"],"step_count":2}
*
* ==> kernel <==
* 19:12:14 up 4:51, 0 users, load average: 6.71, 3.23, 1.78
Linux pause-20210915191040-306161 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [27693cb21ee8] <==
* I0915 19:11:57.612216 1 trace.go:205] Trace[90908807]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:4e96f995-20f4-46d9-9e91-48a3165f9dbc,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:54.869) (total time: 2742ms):
Trace[90908807]: [2.742216687s] [2.742216687s] END
E0915 19:11:57.612302 1 timeout.go:135] post-timeout activity - time-elapsed: 3.47907ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
E0915 19:11:57.613288 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.618821 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.620020 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E0915 19:11:57.621075 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.624382 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.634372 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.635509 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.636607 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.637770 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.638916 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.640203 1 timeout.go:135] post-timeout activity - time-elapsed: 31.371334ms, POST "/api/v1/namespaces/default/events" result: <nil>
I0915 19:11:57.641167 1 trace.go:205] Trace[1143014761]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:bb3c2312-8930-436b-870a-91869178ba4a,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:54.844) (total time: 2796ms):
Trace[1143014761]: [2.796835127s] [2.796835127s] END
E0915 19:11:57.641259 1 timeout.go:135] post-timeout activity - time-elapsed: 32.032015ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
E0915 19:11:57.642472 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0915 19:11:57.643605 1 trace.go:205] Trace[137628608]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:0e2bebaf-ce34-4283-89d7-b9b51c771f61,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:56.470) (total time: 1173ms):
Trace[137628608]: [1.173147509s] [1.173147509s] END
E0915 19:11:57.643697 1 timeout.go:135] post-timeout activity - time-elapsed: 34.342272ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token" result: <nil>
I0915 19:11:57.644701 1 trace.go:205] Trace[840664150]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:8be84463-647a-4c66-8e45-50f775e917ef,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:56.069) (total time: 1574ms):
Trace[840664150]: [1.574755248s] [1.574755248s] END
E0915 19:11:57.644778 1 timeout.go:135] post-timeout activity - time-elapsed: 35.255523ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token" result: <nil>
E0915 19:11:57.648324 1 timeout.go:135] post-timeout activity - time-elapsed: 38.851875ms, GET "/api/v1/nodes/pause-20210915191040-306161" result: <nil>
*
* ==> kube-controller-manager [a5f849b43b1c] <==
* I0915 19:11:37.411967 1 shared_informer.go:247] Caches are synced for deployment
I0915 19:11:37.414182 1 shared_informer.go:247] Caches are synced for PVC protection
I0915 19:11:37.420105 1 shared_informer.go:247] Caches are synced for job
I0915 19:11:37.425007 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0915 19:11:37.438253 1 shared_informer.go:247] Caches are synced for cronjob
I0915 19:11:37.438264 1 shared_informer.go:247] Caches are synced for HPA
I0915 19:11:37.438289 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0915 19:11:37.438302 1 shared_informer.go:247] Caches are synced for attach detach
I0915 19:11:37.438587 1 shared_informer.go:247] Caches are synced for service account
I0915 19:11:37.438657 1 shared_informer.go:247] Caches are synced for expand
I0915 19:11:37.462666 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0915 19:11:37.523114 1 shared_informer.go:247] Caches are synced for resource quota
I0915 19:11:37.530198 1 shared_informer.go:247] Caches are synced for daemon sets
I0915 19:11:37.538172 1 shared_informer.go:247] Caches are synced for resource quota
I0915 19:11:37.544233 1 shared_informer.go:247] Caches are synced for stateful set
I0915 19:11:37.601354 1 shared_informer.go:247] Caches are synced for persistent volume
I0915 19:11:38.046450 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 19:11:38.110631 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 19:11:38.110664 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0915 19:11:38.127841 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0915 19:11:38.332278 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m4kfv"
I0915 19:11:38.429572 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-wgdfl"
I0915 19:11:38.437471 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-fvrsb"
I0915 19:11:38.578331 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I0915 19:11:38.582612 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-fvrsb"
*
* ==> kube-proxy [d15f2b587bbf] <==
* I0915 19:11:39.363141 1 node.go:172] Successfully retrieved node IP: 192.168.58.2
I0915 19:11:39.363213 1 server_others.go:140] Detected node IP 192.168.58.2
W0915 19:11:39.363234 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0915 19:11:39.600471 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0915 19:11:39.600519 1 server_others.go:212] Using iptables Proxier.
I0915 19:11:39.600534 1 server_others.go:219] creating dualStackProxier for iptables.
W0915 19:11:39.600550 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0915 19:11:39.600893 1 server.go:649] Version: v1.22.1
I0915 19:11:39.601741 1 config.go:315] Starting service config controller
I0915 19:11:39.601782 1 shared_informer.go:240] Waiting for caches to sync for service config
I0915 19:11:39.601802 1 config.go:224] Starting endpoint slice config controller
I0915 19:11:39.601807 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0915 19:11:39.608834 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915191040-306161.16a5149bd9fff3a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048af3ae3db7be0, ext:474764259, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915191040-306161", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20210915191040-306161", UID:"pause-20210915191040-306161", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915191040-306161.16a5149bd9fff3a7" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 19:11:39.703082 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0915 19:11:39.703101 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [b2b6bdeb15b9] <==
* E0915 19:11:22.694638 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 19:11:22.702299 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 19:11:22.702435 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 19:11:22.702528 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.702603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 19:11:22.702702 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 19:11:22.702774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 19:11:22.702846 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.702958 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 19:11:22.703036 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 19:11:22.703110 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.703205 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 19:11:22.703278 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.703421 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 19:11:22.703439 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 19:11:23.563383 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 19:11:23.568607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 19:11:23.616199 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:23.625013 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:23.682990 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 19:11:23.688603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 19:11:23.721201 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 19:11:23.744652 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 19:11:23.777627 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
I0915 19:11:26.574973 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-09-15 19:10:43 UTC, end at Wed 2021-09-15 19:12:14 UTC. --
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895407 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/87e207cf7ed458dc963e57311bee91a6-etcd-certs\") pod \"etcd-pause-20210915191040-306161\" (UID: \"87e207cf7ed458dc963e57311bee91a6\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895438 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-usr-share-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895463 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de1da23176b119dfecc95ecb61dd307-etc-ca-certificates\") pod \"kube-controller-manager-pause-20210915191040-306161\" (UID: \"7de1da23176b119dfecc95ecb61dd307\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895490 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7de1da23176b119dfecc95ecb61dd307-kubeconfig\") pod \"kube-controller-manager-pause-20210915191040-306161\" (UID: \"7de1da23176b119dfecc95ecb61dd307\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895515 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/87e207cf7ed458dc963e57311bee91a6-etcd-data\") pod \"etcd-pause-20210915191040-306161\" (UID: \"87e207cf7ed458dc963e57311bee91a6\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895541 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-etc-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895566 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-k8s-certs\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895594 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-usr-local-share-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.468719 4310 apiserver.go:52] "Watching apiserver"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.472864 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.473029 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.473090 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499643 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xmq\" (UniqueName: \"kubernetes.io/projected/6ec6737e-98cf-42bd-a911-9c89db377921-kube-api-access-r9xmq\") pod \"storage-provisioner\" (UID: \"6ec6737e-98cf-42bd-a911-9c89db377921\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499775 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/136bb615-a920-412f-b4ce-fdeeb75acfcb-xtables-lock\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499812 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6ec6737e-98cf-42bd-a911-9c89db377921-tmp\") pod \"storage-provisioner\" (UID: \"6ec6737e-98cf-42bd-a911-9c89db377921\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499841 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/136bb615-a920-412f-b4ce-fdeeb75acfcb-kube-proxy\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499868 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh642\" (UniqueName: \"kubernetes.io/projected/136bb615-a920-412f-b4ce-fdeeb75acfcb-kube-api-access-sh642\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499900 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5qs\" (UniqueName: \"kubernetes.io/projected/1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83-kube-api-access-tf5qs\") pod \"coredns-78fcd69978-wgdfl\" (UID: \"1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499929 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/136bb615-a920-412f-b4ce-fdeeb75acfcb-lib-modules\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499956 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83-config-volume\") pod \"coredns-78fcd69978-wgdfl\" (UID: \"1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499971 4310 reconciler.go:157] "Reconciler: start to sync state"
Sep 15 19:11:57 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:57.602522 4310 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: kubelet.service: Succeeded.
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
*
* ==> storage-provisioner [bba215ebb412] <==
* I0915 19:11:51.762405 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 19:11:51.771218 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 19:11:51.771251 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 19:11:51.787868 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 19:11:51.788000 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b!
I0915 19:11:51.788045 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cc154c3-2477-4781-8800-3bd0adce3c8a", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b became leader
I0915 19:11:51.888127 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b!
-- /stdout --
** stderr **
E0915 19:12:14.073177 484675 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect pause-20210915191040-306161
helpers_test.go:236: (dbg) docker inspect pause-20210915191040-306161:
-- stdout --
[
{
"Id": "bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061",
"Created": "2021-09-15T19:10:42.551354157Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 459889,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-09-15T19:10:43.169524869Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
"ResolvConfPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/hostname",
"HostsPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/hosts",
"LogPath": "/var/lib/docker/containers/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061/bca902ad6107c32b526bd51ea94e9df6f7607330dfc07e7e90b9556bf40fb061-json.log",
"Name": "/pause-20210915191040-306161",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20210915191040-306161:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20210915191040-306161",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e-init/diff:/var/lib/docker/overlay2/42af7561be1bbb04e60d5898b29ceee8cccacce1c1e5ac5a5471bd2c2eb199c1/diff:/var/lib/docker/overlay2/751fc28b4872d1d49285c85a1cc36b03020834feb891b324f4934fc7bbb2a6ad/diff:/var/lib/docker/overlay2/0ecc58bf735cb0b519ef22a62daf0e530d708564e0b5e2a32c0d2dfbf82c31f1/diff:/var/lib/docker/overlay2/40f6c03795cd703755126d403a243ddfa2b5a61c04ba64a54188be2d675512f0/diff:/var/lib/docker/overlay2/c1ed82aa36dccda427851e6e631dda57f4ea197ec647ed726ad163b6ab37743f/diff:/var/lib/docker/overlay2/80d741d1e18c6360d99304165056959acb58e58f7c7d3b65d8d53565555b4030/diff:/var/lib/docker/overlay2/447eff9d25dbd151310c8d31d34423f404bc92869252861a8515d52575670208/diff:/var/lib/docker/overlay2/c0772139dfb4023aacb541a406a5e5d1697bd07fb74faf19f0f70c0378a6a0aa/diff:/var/lib/docker/overlay2/b6bb35ea8704143c911f2f148b8a83fb1595e56bdb785ad83f7f5b0ba255050b/diff:/var/lib/docker/overlay2/77cfc0
f9841fd241edda977799472f43ff62c670f7f0dc94936630ed52e1b91e/diff:/var/lib/docker/overlay2/47450527f7ce6a152d348dbe5dac4026cc11f3f715cf7e7b7ea7e797df3f5cf4/diff:/var/lib/docker/overlay2/5ea3e2569f810895df6596566930f3068e5c5d42831d6bda8de83d19c1185d3f/diff:/var/lib/docker/overlay2/d116c54fdfbaac2c30c98b1317c22640c46f07f3f9924da3e450e7cc3f026f3b/diff:/var/lib/docker/overlay2/4f1d2222e5e679ccc39dcf8be0e0caad5028cf91c0cc32b4025b79da0530cf79/diff:/var/lib/docker/overlay2/d395fbc24d899f00e3584b6298d4b2d46f9c834636fa54bf82aec3d8bf7aa1e4/diff:/var/lib/docker/overlay2/a294a18c3929a26e58691a4f489d1d4ed51d12a1c252dc68fdee0b9bddededd1/diff:/var/lib/docker/overlay2/8bba47552514f0de9059dcaf0fac98e702505d70e78a52d0394a82f5fc85fa37/diff:/var/lib/docker/overlay2/020c4987317db46a4f9cdc01cf984f014ead8756b3e1c340fb8f181a80339c1c/diff:/var/lib/docker/overlay2/a9def08cecacaa841f943f37f2b4027d2b8c9c6b2a0e55d4844d2c257eddd0e6/diff:/var/lib/docker/overlay2/17ef9dd7c4c56b129b7bcdf2629184e963776cba2a5484bd18ca3102b68d02cc/diff:/var/lib/d
ocker/overlay2/8d91d9e3464303a35c2d8a3a3f93f0ab30cd9fe6fa28c942ddc92769cf590ac9/diff:/var/lib/docker/overlay2/9830b73f3f06b9fc4f470e3b1dc16b8a3299272056c5131a298e6bb42a083941/diff:/var/lib/docker/overlay2/ea9d596d994ea5dfd5a26427720cb2e8361d797aa279267428df94c19a9d7c44/diff:/var/lib/docker/overlay2/2e1466e09da2855871ebe76fbcffcb15fd134652525feb0105fe152a084d210e/diff:/var/lib/docker/overlay2/9bdc54e7df3f3e5c2597a98b2737c550e37e1f78d16859fb6518d8b440ca0b08/diff:/var/lib/docker/overlay2/5037f238f4a3fe4dab6cd163114a4e1ff421eac25c08e1692759c11df7b3ee16/diff:/var/lib/docker/overlay2/873666dc974cc73345c5bdf7408c32b17137d25cc199834ebd8660d05c9481e2/diff:/var/lib/docker/overlay2/139b02d97d4069f11d6bc3503baa4785c0a78b8bd7bc9cb27acaf894f3107aa9/diff:/var/lib/docker/overlay2/6e6fd6abdb501264791d7c049ecc9089ab221a7aa6f4cbb745874fefd93cf871/diff:/var/lib/docker/overlay2/11575ed178ccca3cb09dfeb7628b08e11e6a3357f3659c2890a288ce9f2ba782/diff:/var/lib/docker/overlay2/bb3435980235e05ccce2d74ef847a0acd46c4edf0d38983388b271aa9dd
b6053/diff:/var/lib/docker/overlay2/0e3e4464369c9a27193d7ad87e67c5485363aeabca8fcbc6855eab5789e4c2a5/diff:/var/lib/docker/overlay2/41e71fdf4be3041a17553878254d63ea1c8c4063994bf2725203012ee9dd93c5/diff:/var/lib/docker/overlay2/76ee62595a4ab520c3233836c68c0bc0297b3907fff6db1b659f16f4325b8d3c/diff:/var/lib/docker/overlay2/dc211f5ed531b1debbcdd38dd7129b846d5e959af10f3144da2541b5c6f0233b/diff:/var/lib/docker/overlay2/6822a79442f86fb63c1067f542f9f6ce9288d3cec3ed6330fce4a6e3f48db77e/diff:/var/lib/docker/overlay2/757fc4ce96317e0c241e2344c927a20f4844398da422284baf7de7aebb6f828f/diff:/var/lib/docker/overlay2/3672ef17e397e206ae409a53d2bca46c162142ff01c54c787ba59f0bf2dad58a/diff:/var/lib/docker/overlay2/e04f51cc901a2cc97f893a3d4c6ebb453df442a6d57974d1b702d6ad1efab159/diff",
"MergedDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/merged",
"UpperDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/diff",
"WorkDir": "/var/lib/docker/overlay2/5e3ffb7c28acf9d38fbd1aa28f21ed029817d0ac81f88df792ac7af13d5d5d8e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-20210915191040-306161",
"Source": "/var/lib/docker/volumes/pause-20210915191040-306161/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-20210915191040-306161",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20210915191040-306161",
"name.minikube.sigs.k8s.io": "pause-20210915191040-306161",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "df07212579862d4f5ae99ffb739ae3c99da6a0412ff03dd0d6816a60189e779d",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33118"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33117"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33112"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33116"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33114"
}
]
},
"SandboxKey": "/var/run/docker/netns/df0721257986",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20210915191040-306161": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"bca902ad6107"
],
"NetworkID": "515ea92b2bf138022bfb99d0055b7e5e76b1d54cb7cfb53912aad56a462f4219",
"EndpointID": "a897f77a4ed3bef902f3cd13240fca7c0c0e799541ca81af9ad32dd3573eec4d",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915191040-306161 -n pause-20210915191040-306161
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915191040-306161 -n pause-20210915191040-306161: exit status 2 (443.569974ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210915191040-306161 logs -n 25
=== CONT TestPause/serial/PauseAgain
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210915191040-306161 logs -n 25: exit status 110 (13.211185347s)
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
| ssh | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:07:32 UTC | Wed, 15 Sep 2021 19:07:35 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | -- docker pull busybox | | | | | |
| start | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:07:35 UTC | Wed, 15 Sep 2021 19:08:08 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:08 UTC | Wed, 15 Sep 2021 19:08:09 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| | -- docker images | | | | | |
| delete | -p | test-preload-20210915190619-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:09 UTC | Wed, 15 Sep 2021 19:08:14 UTC |
| | test-preload-20210915190619-306161 | | | | | |
| start | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:14 UTC | Wed, 15 Sep 2021 19:08:36 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --memory=2048 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| stop | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:36 UTC | Wed, 15 Sep 2021 19:08:36 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:08:54 UTC | Wed, 15 Sep 2021 19:09:10 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| | --schedule 5s | | | | | |
| delete | -p | scheduled-stop-20210915190814-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:09:19 UTC | Wed, 15 Sep 2021 19:09:21 UTC |
| | scheduled-stop-20210915190814-306161 | | | | | |
| start | -p | skaffold-20210915190921-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:09:22 UTC | Wed, 15 Sep 2021 19:09:43 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| | --memory=2600 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| docker-env | --shell none -p | skaffold-20210915190921-306161 | skaffold | v1.23.0 | Wed, 15 Sep 2021 19:09:44 UTC | Wed, 15 Sep 2021 19:09:44 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p | skaffold-20210915190921-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:27 UTC | Wed, 15 Sep 2021 19:10:30 UTC |
| | skaffold-20210915190921-306161 | | | | | |
| delete | -p | insufficient-storage-20210915191030-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:38 UTC | Wed, 15 Sep 2021 19:10:40 UTC |
| | insufficient-storage-20210915191030-306161 | | | | | |
| delete | -p | flannel-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:10:40 UTC |
| | flannel-20210915191040-306161 | | | | | |
| start | -p | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:27 UTC |
| | force-systemd-flag-20210915191040-306161 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| -p | force-systemd-flag-20210915191040-306161 | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:27 UTC | Wed, 15 Sep 2021 19:11:28 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-flag-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:28 UTC | Wed, 15 Sep 2021 19:11:30 UTC |
| | force-systemd-flag-20210915191040-306161 | | | | | |
| start | -p | offline-docker-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:42 UTC |
| | offline-docker-20210915191040-306161 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:10:40 UTC | Wed, 15 Sep 2021 19:11:42 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | offline-docker-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:42 UTC | Wed, 15 Sep 2021 19:11:50 UTC |
| | offline-docker-20210915191040-306161 | | | | | |
| start | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:43 UTC | Wed, 15 Sep 2021 19:11:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| pause | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:52 UTC | Wed, 15 Sep 2021 19:11:52 UTC |
| | --alsologtostderr -v=5 | | | | | |
| unpause | -p pause-20210915191040-306161 | pause-20210915191040-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:53 UTC | Wed, 15 Sep 2021 19:11:55 UTC |
| | --alsologtostderr -v=5 | | | | | |
| start | -p | force-systemd-env-20210915191133-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:11:33 UTC | Wed, 15 Sep 2021 19:12:04 UTC |
| | force-systemd-env-20210915191133-306161 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| -p | force-systemd-env-20210915191133-306161 | force-systemd-env-20210915191133-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:12:04 UTC | Wed, 15 Sep 2021 19:12:05 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-env-20210915191133-306161 | jenkins | v1.23.0 | Wed, 15 Sep 2021 19:12:05 UTC | Wed, 15 Sep 2021 19:12:07 UTC |
| | force-systemd-env-20210915191133-306161 | | | | | |
|------------|--------------------------------------------|--------------------------------------------|----------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 19:12:08
Running on machine: debian-jenkins-agent-3
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 19:12:08.025776 487286 out.go:298] Setting OutFile to fd 1 ...
I0915 19:12:08.025870 487286 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:12:08.025874 487286 out.go:311] Setting ErrFile to fd 2...
I0915 19:12:08.025880 487286 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 19:12:08.026026 487286 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 19:12:08.026358 487286 out.go:305] Setting JSON to false
I0915 19:12:08.070601 487286 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":17505,"bootTime":1631715623,"procs":254,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 19:12:08.070728 487286 start.go:121] virtualization: kvm guest
I0915 19:12:08.073355 487286 out.go:177] * [kubernetes-upgrade-20210915191207-306161] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 19:12:08.075064 487286 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 19:12:08.073502 487286 notify.go:169] Checking for updates...
I0915 19:12:08.076702 487286 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 19:12:08.078352 487286 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
I0915 19:12:08.079865 487286 out.go:177] - MINIKUBE_LOCATION=12425
I0915 19:12:08.080406 487286 config.go:177] Loaded profile config "missing-upgrade-20210915191150-306161": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
I0915 19:12:08.080528 487286 config.go:177] Loaded profile config "pause-20210915191040-306161": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0915 19:12:08.080620 487286 config.go:177] Loaded profile config "stopped-upgrade-20210915191040-306161": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
I0915 19:12:08.080670 487286 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 19:12:08.137947 487286 docker.go:132] docker version: linux-19.03.15
I0915 19:12:08.138091 487286 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 19:12:08.237284 487286 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:199 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:67 SystemTime:2021-09-15 19:12:08.178091982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742192640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 19:12:08.237381 487286 docker.go:237] overlay module found
I0915 19:12:08.239770 487286 out.go:177] * Using the docker driver based on user configuration
I0915 19:12:08.239801 487286 start.go:278] selected driver: docker
I0915 19:12:08.239807 487286 start.go:751] validating driver "docker" against <nil>
I0915 19:12:08.239837 487286 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0915 19:12:08.239895 487286 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0915 19:12:08.239920 487286 out.go:242] ! Your cgroup does not allow setting memory.
I0915 19:12:08.241421 487286 out.go:177] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0915 19:12:08.242340 487286 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0915 19:12:08.339765 487286 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:199 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:71 SystemTime:2021-09-15 19:12:08.284891271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742192640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0915 19:12:08.339975 487286 start_flags.go:264] no existing cluster config was found, will generate one from the flags
I0915 19:12:08.340202 487286 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
I0915 19:12:08.340237 487286 cni.go:93] Creating CNI manager for ""
I0915 19:12:08.340246 487286 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0915 19:12:08.340255 487286 start_flags.go:278] config:
{Name:kubernetes-upgrade-20210915191207-306161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210915191207-306161 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 19:12:08.342705 487286 out.go:177] * Starting control plane node kubernetes-upgrade-20210915191207-306161 in cluster kubernetes-upgrade-20210915191207-306161
I0915 19:12:08.342754 487286 cache.go:118] Beginning downloading kic base image for docker with docker
I0915 19:12:08.344353 487286 out.go:177] * Pulling base image ...
I0915 19:12:08.344402 487286 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
I0915 19:12:08.344442 487286 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
I0915 19:12:08.344459 487286 cache.go:57] Caching tarball of preloaded images
I0915 19:12:08.344487 487286 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
I0915 19:12:08.344607 487286 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0915 19:12:08.344631 487286 cache.go:60] Finished verifying existence of preloaded tar for v1.14.0 on docker
I0915 19:12:08.344766 487286 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/kubernetes-upgrade-20210915191207-306161/config.json ...
I0915 19:12:08.344799 487286 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/kubernetes-upgrade-20210915191207-306161/config.json: {Name:mkeb777a02fe11607f3344dcdec3a99f83a463ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 19:12:08.452652 487286 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
I0915 19:12:08.452705 487286 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
I0915 19:12:08.452725 487286 cache.go:206] Successfully downloaded all kic artifacts
I0915 19:12:08.452801 487286 start.go:313] acquiring machines lock for kubernetes-upgrade-20210915191207-306161: {Name:mkdcde9b9510cd8c0953e2aab39569e6c028f446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 19:12:08.452979 487286 start.go:317] acquired machines lock for "kubernetes-upgrade-20210915191207-306161" in 150.4µs
I0915 19:12:08.453018 487286 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20210915191207-306161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210915191207-306161 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
I0915 19:12:08.453131 487286 start.go:126] createHost starting for "" (driver="docker")
I0915 19:12:08.324812 482400 cache_images.go:309] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0915 19:12:08.324859 482400 docker.go:206] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
I0915 19:12:08.324878 482400 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.4 | docker load"
I0915 19:12:09.375311 482400 ssh_runner.go:192] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/metrics-scraper_v1.0.4 | docker load": (1.05041394s)
I0915 19:12:09.375340 482400 cache_images.go:309] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
I0915 19:12:09.375381 482400 docker.go:206] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
I0915 19:12:09.375429 482400 ssh_runner.go:152] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/dashboard_v2.1.0 | docker load"
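The `sudo cat <tarball> | docker load` runs above stream cached image archives into the daemon over ssh. The same step could be done directly from Go by wiring the archive to `docker load`'s stdin; a hedged sketch, not minikube's ssh_runner code, reusing a path from the log:

```go
// Sketch: stream an image tarball into `docker load` via stdin,
// equivalent to the `cat <tarball> | docker load` pipeline above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/metrics-scraper_v1.0.4") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```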
I0915 19:12:08.456384 487286 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0915 19:12:08.456677 487286 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20210915191207-306161" (driver="docker")
I0915 19:12:08.456711 487286 client.go:168] LocalClient.Create starting
I0915 19:12:08.456807 487286 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem
I0915 19:12:08.456838 487286 main.go:130] libmachine: Decoding PEM data...
I0915 19:12:08.456855 487286 main.go:130] libmachine: Parsing certificate...
I0915 19:12:08.457027 487286 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/cert.pem
I0915 19:12:08.457059 487286 main.go:130] libmachine: Decoding PEM data...
I0915 19:12:08.457073 487286 main.go:130] libmachine: Parsing certificate...
I0915 19:12:08.457489 487286 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210915191207-306161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0915 19:12:08.502597 487286 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210915191207-306161 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0915 19:12:08.502692 487286 network_create.go:255] running [docker network inspect kubernetes-upgrade-20210915191207-306161] to gather additional debugging logs...
I0915 19:12:08.502713 487286 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210915191207-306161
W0915 19:12:08.551490 487286 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210915191207-306161 returned with exit code 1
I0915 19:12:08.551535 487286 network_create.go:258] error running [docker network inspect kubernetes-upgrade-20210915191207-306161]: docker network inspect kubernetes-upgrade-20210915191207-306161: exit status 1
stdout:
[]
stderr:
Error: No such network: kubernetes-upgrade-20210915191207-306161
I0915 19:12:08.551553 487286 network_create.go:260] output of [docker network inspect kubernetes-upgrade-20210915191207-306161]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: kubernetes-upgrade-20210915191207-306161
** /stderr **
I0915 19:12:08.551613 487286 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 19:12:08.597483 487286 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006f8010] misses:0}
I0915 19:12:08.597544 487286 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0915 19:12:08.597565 487286 network_create.go:106] attempt to create docker network kubernetes-upgrade-20210915191207-306161 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0915 19:12:08.597626 487286 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210915191207-306161
I0915 19:12:08.692042 487286 network_create.go:90] docker network kubernetes-upgrade-20210915191207-306161 192.168.49.0/24 created
I0915 19:12:08.692085 487286 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20210915191207-306161" container
I0915 19:12:08.692152 487286 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0915 19:12:08.743176 487286 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20210915191207-306161 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210915191207-306161 --label created_by.minikube.sigs.k8s.io=true
I0915 19:12:08.795661 487286 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20210915191207-306161
I0915 19:12:08.795773 487286 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20210915191207-306161-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210915191207-306161 --entrypoint /usr/bin/test -v kubernetes-upgrade-20210915191207-306161:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
I0915 19:12:09.634169 487286 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20210915191207-306161
W0915 19:12:09.634234 487286 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0915 19:12:09.634245 487286 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0915 19:12:09.634306 487286 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
I0915 19:12:09.634349 487286 kic.go:179] Starting extracting preloaded images to volume ...
I0915 19:12:09.634362 487286 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0915 19:12:09.634418 487286 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20210915191207-306161:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir
I0915 19:12:09.728226 487286 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20210915191207-306161 --name kubernetes-upgrade-20210915191207-306161 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210915191207-306161 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20210915191207-306161 --network kubernetes-upgrade-20210915191207-306161 --ip 192.168.49.2 --volume kubernetes-upgrade-20210915191207-306161:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
I0915 19:12:10.305853 487286 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210915191207-306161 --format={{.State.Running}}
I0915 19:12:10.361801 487286 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210915191207-306161 --format={{.State.Status}}
I0915 19:12:10.412716 487286 cli_runner.go:115] Run: docker exec kubernetes-upgrade-20210915191207-306161 stat /var/lib/dpkg/alternatives/iptables
I0915 19:12:10.561256 487286 oci.go:281] the created container "kubernetes-upgrade-20210915191207-306161" has a running status.
I0915 19:12:10.561290 487286 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/kubernetes-upgrade-20210915191207-306161/id_rsa...
I0915 19:12:10.901623 487286 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12425-302762-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/kubernetes-upgrade-20210915191207-306161/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0915 19:12:11.296134 487286 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210915191207-306161 --format={{.State.Status}}
I0915 19:12:11.347200 487286 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0915 19:12:11.347224 487286 kic_runner.go:115] Args: [docker exec --privileged kubernetes-upgrade-20210915191207-306161 chown docker:docker /home/docker/.ssh/authorized_keys]
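The `Last Start` log above is klog output, matching the format declared in its header (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). For reference, a minimal program that emits lines in the same format using the real k8s.io/klog/v2 package:

```go
// Minimal klog example; output matches the I0915 19:12:08... lines above.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)               // registers klog's flags (-v, -logtostderr, ...)
	flag.Set("logtostderr", "true")   // send log lines to stderr
	flag.Parse()
	// Prints e.g.: I0915 19:12:08.025776  487286 main.go:16] hostinfo loaded
	klog.Info("hostinfo loaded")
	klog.Flush()
}
```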
*
* ==> Docker <==
* -- Logs begin at Wed 2021-09-15 19:10:43 UTC, end at Wed 2021-09-15 19:12:15 UTC. --
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.748712844Z" level=info msg="Starting up"
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750726725Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750759815Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750780766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.750789481Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752119284Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752145616Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752162703Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.752175184Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 19:10:54 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:10:54.785563417Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.554871470Z" level=warning msg="Your kernel does not support swap memory limit"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.554906136Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.555074626Z" level=info msg="Loading containers: start."
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.683507911Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.723475583Z" level=info msg="Loading containers: done."
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.769826862Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.769898162Z" level=info msg="Daemon has completed initialization"
Sep 15 19:11:09 pause-20210915191040-306161 systemd[1]: Started Docker Application Container Engine.
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.795475313Z" level=info msg="API listen on [::]:2376"
Sep 15 19:11:09 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:09.801062171Z" level=info msg="API listen on /var/run/docker.sock"
Sep 15 19:11:39 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:39.846858888Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Sep 15 19:11:39 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:39.912853399Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Sep 15 19:11:46 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:46.010009521Z" level=info msg="ignoring event" container=911ed5b73e6164b00cf21e7a002849689816a87dee31cfd456db3785edf19049 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 19:11:49 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:49.461607020Z" level=info msg="ignoring event" container=6b92c509ec02896125a066229c0d5f6d8bfe46bc786dae6c3721a2946143e297 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 19:11:57 pause-20210915191040-306161 dockerd[450]: time="2021-09-15T19:11:57.959519722Z" level=error msg="Handler for POST /v1.41/containers/05fa6730f481/pause returned error: Cannot pause container 05fa6730f48169d2ac729d988cd9323576ee6eec8a609c586992b637f8c4d022: OCI runtime pause failed: unable to freeze: unknown"
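That last dockerd line is the proximate failure behind the pause command's exit status 80: runc could not freeze container 05fa6730f481, which the status listing below identifies as the etcd container. On a cgroup-v1 host like this one (Debian 9, `CgroupDriver:cgroupfs`), `docker pause` works by writing to the container's freezer cgroup, so a first diagnostic is confirming the freezer controller is mounted. A hedged sketch, not minikube code; the paths are the conventional cgroup-v1 locations:

```go
// Diagnostic sketch: check that the cgroup-v1 freezer controller exists,
// the mechanism `docker pause` relies on to freeze a container's processes.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/freezer"); err != nil {
		fmt.Println("freezer cgroup not mounted:", err)
		return
	}
	// The per-container state file, e.g.
	// /sys/fs/cgroup/freezer/docker/<id>/freezer.state,
	// reports THAWED, FREEZING, or FROZEN.
	fmt.Println("freezer cgroup present")
}
```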
*
* ==> container status <==
* time="2021-09-15T19:12:17Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bba215ebb412 6e38f40d628d "/storage-provisioner" 26 seconds ago Up 25 seconds (Paused) k8s_storage-provisioner_storage-provisioner_kube-system_6ec6737e-98cf-42bd-a911-9c89db377921_0
34c35294796d k8s.gcr.io/pause:3.5 "/pause" 26 seconds ago Up 26 seconds (Paused) k8s_POD_storage-provisioner_kube-system_6ec6737e-98cf-42bd-a911-9c89db377921_0
3dcc70b67afa 8d147537fb7d "/coredns -conf /etc…" 38 seconds ago Up 37 seconds (Paused) k8s_coredns_coredns-78fcd69978-wgdfl_kube-system_1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83_0
d15f2b587bbf 36c4ebbc9d97 "/usr/local/bin/kube…" 39 seconds ago Up 38 seconds (Paused) k8s_kube-proxy_kube-proxy-m4kfv_kube-system_136bb615-a920-412f-b4ce-fdeeb75acfcb_0
6b92c509ec02 k8s.gcr.io/pause:3.5 "/pause" 39 seconds ago Exited (0) 28 seconds ago k8s_POD_coredns-78fcd69978-fvrsb_kube-system_595bf515-bff7-4331-9552-89a5ba57ca6b_0
8dd7be01f740 k8s.gcr.io/pause:3.5 "/pause" 39 seconds ago Up 37 seconds (Paused) k8s_POD_coredns-78fcd69978-wgdfl_kube-system_1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83_0
74da63129b6f k8s.gcr.io/pause:3.5 "/pause" 39 seconds ago Up 38 seconds (Paused) k8s_POD_kube-proxy-m4kfv_kube-system_136bb615-a920-412f-b4ce-fdeeb75acfcb_0
a5f849b43b1c 6e002eb89a88 "kube-controller-man…" About a minute ago Up 59 seconds (Paused) k8s_kube-controller-manager_kube-controller-manager-pause-20210915191040-306161_kube-system_7de1da23176b119dfecc95ecb61dd307_0
b2b6bdeb15b9 aca5ededae9c "kube-scheduler --au…" About a minute ago Up 59 seconds (Paused) k8s_kube-scheduler_kube-scheduler-pause-20210915191040-306161_kube-system_1a018d46efe067cf543c61ee8873be83_0
27693cb21ee8 f30469a2491a "kube-apiserver --ad…" About a minute ago Up 59 seconds (Paused) k8s_kube-apiserver_kube-apiserver-pause-20210915191040-306161_kube-system_b43c2d1987c2753ab3b743454432ec65_0
05fa6730f481 004811815584 "etcd --advertise-cl…" About a minute ago Up 59 seconds k8s_etcd_etcd-pause-20210915191040-306161_kube-system_87e207cf7ed458dc963e57311bee91a6_0
cd525100a832 k8s.gcr.io/pause:3.5 "/pause" About a minute ago Up 59 seconds (Paused) k8s_POD_kube-scheduler-pause-20210915191040-306161_kube-system_1a018d46efe067cf543c61ee8873be83_0
5da3019bf389 k8s.gcr.io/pause:3.5 "/pause" About a minute ago Up 59 seconds (Paused) k8s_POD_kube-controller-manager-pause-20210915191040-306161_kube-system_7de1da23176b119dfecc95ecb61dd307_0
2b95e975790b k8s.gcr.io/pause:3.5 "/pause" About a minute ago Up 59 seconds (Paused) k8s_POD_kube-apiserver-pause-20210915191040-306161_kube-system_b43c2d1987c2753ab3b743454432ec65_0
ce786a1a3e58 k8s.gcr.io/pause:3.5 "/pause" About a minute ago Up 59 seconds (Paused) k8s_POD_etcd-pause-20210915191040-306161_kube-system_87e207cf7ed458dc963e57311bee91a6_0
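Note that every container in the listing shows `(Paused)` except 05fa6730f481 (etcd), which is merely `Up`, consistent with the freeze error in the Docker log above. A quick cross-check of which containers Docker itself considers paused, as an illustrative sketch using the real `docker ps` status filter:

```go
// Sketch: list paused containers to cross-check the "(Paused)" column above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "ps",
		"--filter", "status=paused", "--format", "{{.ID}} {{.Names}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // etcd's container should be absent from this list
}
```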
*
* ==> coredns [3dcc70b67afa] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +17.772866] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 18:54] cgroup: cgroup2: unknown option "nsdelegate"
[ +6.702807] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth8705e930
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 12 69 6e a5 f0 15 08 06 .......in.....
[Sep15 18:55] cgroup: cgroup2: unknown option "nsdelegate"
[ +16.873151] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfb23be6d
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff aa c8 da 40 c8 b6 08 06 .........@....
[ +1.199924] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth7db36ce7
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff ce 61 5a cc 64 d4 08 06 .......aZ.d...
[ +13.998812] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 18:56] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethd9ced9de
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 76 3d db ca 7d c7 08 06 ......v=..}...
[ +4.397740] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:06] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:08] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:09] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:10] cgroup: cgroup2: unknown option "nsdelegate"
[ +10.362374] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.511989] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.528368] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:11] cgroup: cgroup2: unknown option "nsdelegate"
[ +26.565621] cgroup: cgroup2: unknown option "nsdelegate"
[ +24.216665] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.000652] cgroup: cgroup2: unknown option "nsdelegate"
[Sep15 19:12] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [05fa6730f481] <==
* {"level":"warn","ts":"2021-09-15T19:11:57.608Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"205.990111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-20210915191040-306161\" ","response":"","error":"context canceled"}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[334900010] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-20210915191040-306161; range_end:; }","duration":"206.246114ms","start":"2021-09-15T19:11:57.402Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[334900010] 'agreement among raft nodes before linearized reading' (duration: 205.986899ms)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T19:11:57.608Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.587999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-20210915191040-306161\" ","response":"","error":"context canceled"}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[1188001142] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-20210915191040-306161; range_end:; }","duration":"209.88735ms","start":"2021-09-15T19:11:57.399Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[1188001142] 'agreement among raft nodes before linearized reading' (duration: 209.585643ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.609Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"205.22734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20210915191040-306161\" ","response":"","error":"context canceled"}
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[1804048700] range","detail":"{range_begin:/registry/minions/pause-20210915191040-306161; range_end:; }","duration":"205.267382ms","start":"2021-09-15T19:11:57.404Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[1804048700] 'agreement among raft nodes before linearized reading' (duration: 205.22617ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.609Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"211.41271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-m4kfv\" ","response":"","error":"context canceled"}
{"level":"info","ts":"2021-09-15T19:11:57.609Z","caller":"traceutil/trace.go:171","msg":"trace[72393194] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-m4kfv; range_end:; }","duration":"211.45198ms","start":"2021-09-15T19:11:57.398Z","end":"2021-09-15T19:11:57.609Z","steps":["trace[72393194] 'agreement among raft nodes before linearized reading' (duration: 211.424329ms)"],"step_count":1}
WARNING: 2021/09/15 19:11:57 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2021-09-15T19:11:57.899Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:58.400Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:58.900Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:59.401Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:11:59.902Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238505855747361020,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-09-15T19:12:00.385Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"2.975183355s","expected-duration":"1s"}
{"level":"info","ts":"2021-09-15T19:12:00.386Z","caller":"traceutil/trace.go:171","msg":"trace[649689859] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"2.985361187s","start":"2021-09-15T19:11:57.401Z","end":"2021-09-15T19:12:00.386Z","steps":["trace[649689859] 'process raft request' (duration: 2.985215108s)"],"step_count":1}
{"level":"warn","ts":"2021-09-15T19:12:00.386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T19:11:57.401Z","time spent":"2.985466989s","remote":"127.0.0.1:49570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1114,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:491 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1041 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2021-09-15T19:12:00.387Z","caller":"traceutil/trace.go:171","msg":"trace[1681736244] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:515; }","duration":"2.988488142s","start":"2021-09-15T19:11:57.398Z","end":"2021-09-15T19:12:00.386Z","steps":["trace[1681736244] 'read index received' (duration: 2.986831523s)","trace[1681736244] 'applied index is now lower than readState.Index' (duration: 1.655849ms)"],"step_count":2}
*
* ==> kernel <==
* 19:12:27 up 4:52, 0 users, load average: 6.32, 3.31, 1.83
Linux pause-20210915191040-306161 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [27693cb21ee8] <==
* I0915 19:11:57.612216 1 trace.go:205] Trace[90908807]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:4e96f995-20f4-46d9-9e91-48a3165f9dbc,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:54.869) (total time: 2742ms):
Trace[90908807]: [2.742216687s] [2.742216687s] END
E0915 19:11:57.612302 1 timeout.go:135] post-timeout activity - time-elapsed: 3.47907ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
E0915 19:11:57.613288 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.618821 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.620020 1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E0915 19:11:57.621075 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.624382 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.634372 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.635509 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.636607 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0915 19:11:57.637770 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.638916 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0915 19:11:57.640203 1 timeout.go:135] post-timeout activity - time-elapsed: 31.371334ms, POST "/api/v1/namespaces/default/events" result: <nil>
I0915 19:11:57.641167 1 trace.go:205] Trace[1143014761]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:bb3c2312-8930-436b-870a-91869178ba4a,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:54.844) (total time: 2796ms):
Trace[1143014761]: [2.796835127s] [2.796835127s] END
E0915 19:11:57.641259 1 timeout.go:135] post-timeout activity - time-elapsed: 32.032015ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
E0915 19:11:57.642472 1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0915 19:11:57.643605 1 trace.go:205] Trace[137628608]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:0e2bebaf-ce34-4283-89d7-b9b51c771f61,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:56.470) (total time: 1173ms):
Trace[137628608]: [1.173147509s] [1.173147509s] END
E0915 19:11:57.643697 1 timeout.go:135] post-timeout activity - time-elapsed: 34.342272ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token" result: <nil>
I0915 19:11:57.644701 1 trace.go:205] Trace[840664150]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:8be84463-647a-4c66-8e45-50f775e917ef,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 19:11:56.069) (total time: 1574ms):
Trace[840664150]: [1.574755248s] [1.574755248s] END
E0915 19:11:57.644778 1 timeout.go:135] post-timeout activity - time-elapsed: 35.255523ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token" result: <nil>
E0915 19:11:57.648324 1 timeout.go:135] post-timeout activity - time-elapsed: 38.851875ms, GET "/api/v1/nodes/pause-20210915191040-306161" result: <nil>
*
* ==> kube-controller-manager [a5f849b43b1c] <==
* I0915 19:11:37.411967 1 shared_informer.go:247] Caches are synced for deployment
I0915 19:11:37.414182 1 shared_informer.go:247] Caches are synced for PVC protection
I0915 19:11:37.420105 1 shared_informer.go:247] Caches are synced for job
I0915 19:11:37.425007 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0915 19:11:37.438253 1 shared_informer.go:247] Caches are synced for cronjob
I0915 19:11:37.438264 1 shared_informer.go:247] Caches are synced for HPA
I0915 19:11:37.438289 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0915 19:11:37.438302 1 shared_informer.go:247] Caches are synced for attach detach
I0915 19:11:37.438587 1 shared_informer.go:247] Caches are synced for service account
I0915 19:11:37.438657 1 shared_informer.go:247] Caches are synced for expand
I0915 19:11:37.462666 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0915 19:11:37.523114 1 shared_informer.go:247] Caches are synced for resource quota
I0915 19:11:37.530198 1 shared_informer.go:247] Caches are synced for daemon sets
I0915 19:11:37.538172 1 shared_informer.go:247] Caches are synced for resource quota
I0915 19:11:37.544233 1 shared_informer.go:247] Caches are synced for stateful set
I0915 19:11:37.601354 1 shared_informer.go:247] Caches are synced for persistent volume
I0915 19:11:38.046450 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 19:11:38.110631 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 19:11:38.110664 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0915 19:11:38.127841 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0915 19:11:38.332278 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m4kfv"
I0915 19:11:38.429572 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-wgdfl"
I0915 19:11:38.437471 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-fvrsb"
I0915 19:11:38.578331 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I0915 19:11:38.582612 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-fvrsb"
*
* ==> kube-proxy [d15f2b587bbf] <==
* I0915 19:11:39.363141 1 node.go:172] Successfully retrieved node IP: 192.168.58.2
I0915 19:11:39.363213 1 server_others.go:140] Detected node IP 192.168.58.2
W0915 19:11:39.363234 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0915 19:11:39.600471 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0915 19:11:39.600519 1 server_others.go:212] Using iptables Proxier.
I0915 19:11:39.600534 1 server_others.go:219] creating dualStackProxier for iptables.
W0915 19:11:39.600550 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0915 19:11:39.600893 1 server.go:649] Version: v1.22.1
I0915 19:11:39.601741 1 config.go:315] Starting service config controller
I0915 19:11:39.601782 1 shared_informer.go:240] Waiting for caches to sync for service config
I0915 19:11:39.601802 1 config.go:224] Starting endpoint slice config controller
I0915 19:11:39.601807 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0915 19:11:39.608834 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915191040-306161.16a5149bd9fff3a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048af3ae3db7be0, ext:474764259, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915191040-306161", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20210915191040-306161", UID:"pause-20210915191040-306161", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915191040-306161.16a5149bd9fff3a7" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 19:11:39.703082 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0915 19:11:39.703101 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [b2b6bdeb15b9] <==
* E0915 19:11:22.694638 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 19:11:22.702299 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 19:11:22.702435 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 19:11:22.702528 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.702603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 19:11:22.702702 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 19:11:22.702774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 19:11:22.702846 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.702958 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 19:11:22.703036 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 19:11:22.703110 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.703205 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 19:11:22.703278 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:22.703421 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 19:11:22.703439 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 19:11:23.563383 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 19:11:23.568607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 19:11:23.616199 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:23.625013 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 19:11:23.682990 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 19:11:23.688603 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 19:11:23.721201 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 19:11:23.744652 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 19:11:23.777627 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
I0915 19:11:26.574973 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-09-15 19:10:43 UTC, end at Wed 2021-09-15 19:12:28 UTC. --
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895407 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/87e207cf7ed458dc963e57311bee91a6-etcd-certs\") pod \"etcd-pause-20210915191040-306161\" (UID: \"87e207cf7ed458dc963e57311bee91a6\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895438 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-usr-share-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895463 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de1da23176b119dfecc95ecb61dd307-etc-ca-certificates\") pod \"kube-controller-manager-pause-20210915191040-306161\" (UID: \"7de1da23176b119dfecc95ecb61dd307\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895490 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7de1da23176b119dfecc95ecb61dd307-kubeconfig\") pod \"kube-controller-manager-pause-20210915191040-306161\" (UID: \"7de1da23176b119dfecc95ecb61dd307\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895515 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/87e207cf7ed458dc963e57311bee91a6-etcd-data\") pod \"etcd-pause-20210915191040-306161\" (UID: \"87e207cf7ed458dc963e57311bee91a6\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895541 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-etc-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895566 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-k8s-certs\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:54 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:54.895594 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43c2d1987c2753ab3b743454432ec65-usr-local-share-ca-certificates\") pod \"kube-apiserver-pause-20210915191040-306161\" (UID: \"b43c2d1987c2753ab3b743454432ec65\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.468719 4310 apiserver.go:52] "Watching apiserver"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.472864 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.473029 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.473090 4310 topology_manager.go:200] "Topology Admit Handler"
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499643 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xmq\" (UniqueName: \"kubernetes.io/projected/6ec6737e-98cf-42bd-a911-9c89db377921-kube-api-access-r9xmq\") pod \"storage-provisioner\" (UID: \"6ec6737e-98cf-42bd-a911-9c89db377921\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499775 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/136bb615-a920-412f-b4ce-fdeeb75acfcb-xtables-lock\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499812 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6ec6737e-98cf-42bd-a911-9c89db377921-tmp\") pod \"storage-provisioner\" (UID: \"6ec6737e-98cf-42bd-a911-9c89db377921\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499841 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/136bb615-a920-412f-b4ce-fdeeb75acfcb-kube-proxy\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499868 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh642\" (UniqueName: \"kubernetes.io/projected/136bb615-a920-412f-b4ce-fdeeb75acfcb-kube-api-access-sh642\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499900 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5qs\" (UniqueName: \"kubernetes.io/projected/1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83-kube-api-access-tf5qs\") pod \"coredns-78fcd69978-wgdfl\" (UID: \"1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499929 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/136bb615-a920-412f-b4ce-fdeeb75acfcb-lib-modules\") pod \"kube-proxy-m4kfv\" (UID: \"136bb615-a920-412f-b4ce-fdeeb75acfcb\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499956 4310 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83-config-volume\") pod \"coredns-78fcd69978-wgdfl\" (UID: \"1943d8e5-e7a1-4ef8-b68c-ba0f3b37df83\") "
Sep 15 19:11:55 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:55.499971 4310 reconciler.go:157] "Reconciler: start to sync state"
Sep 15 19:11:57 pause-20210915191040-306161 kubelet[4310]: I0915 19:11:57.602522 4310 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: kubelet.service: Succeeded.
Sep 15 19:11:57 pause-20210915191040-306161 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
*
* ==> storage-provisioner [bba215ebb412] <==
* I0915 19:11:51.762405 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 19:11:51.771218 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 19:11:51.771251 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 19:11:51.787868 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 19:11:51.788000 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b!
I0915 19:11:51.788045 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cc154c3-2477-4781-8800-3bd0adce3c8a", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b became leader
I0915 19:11:51.888127 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210915191040-306161_cd66c3a2-5e14-4c9a-b16d-648567d27e6b!
-- /stdout --
** stderr **
E0915 19:12:27.765104 490169 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/PauseAgain (32.32s)