=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run: kubectl --context functional-20220511231058-7184 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run: kubectl --context functional-20220511231058-7184 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-swswq" [0c2db6df-37b9-4201-b3a9-44e6d839ff68] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-swswq" [0c2db6df-37b9-4201-b3a9-44e6d839ff68] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.0945525s
functional_test.go:1451: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service list: (7.33968s)
functional_test.go:1465: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1394: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node: exit status 1 (32m2.3005163s)
-- stdout --
https://127.0.0.1:64099
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1467: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node" : exit status 1
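Note: with the Docker driver on Windows, "minikube service --url" opens a tunnel to the cluster and keeps it alive for as long as the terminal stays open (the "! Because you are using a Docker driver on windows..." warning above), so the command blocks instead of returning; combined with the interrupt not being deliverable on Windows (functional_test.go:1394 above), the test appears to have waited out the full ~32m before failing. A non-blocking way to recover the service port, sketched here under the assumption that the kubectl context and service name from this run are still valid, is to read the NodePort straight from the Service object:

  kubectl --context functional-20220511231058-7184 get svc hello-node -o jsonpath="{.spec.ports[0].nodePort}"

The https://127.0.0.1:64099 URL in stdout is the host end of that tunnel, not the NodePort itself.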
functional_test.go:1404: service test failed - dumping debug information
functional_test.go:1405: -----------------------service failure post-mortem--------------------------------
functional_test.go:1408: (dbg) Run: kubectl --context functional-20220511231058-7184 describe po hello-node
functional_test.go:1412: hello-node pod describe:
Name: hello-node-54fbb85-swswq
Namespace: default
Priority: 0
Node: functional-20220511231058-7184/192.168.49.2
Start Time: Wed, 11 May 2022 23:18:39 +0000
Labels: app=hello-node
pod-template-hash=54fbb85
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID: docker://b131f8171a5cea2e9295bfdf416a54e8a4dd7b066ff46c81afefc4a1b6138ee9
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 11 May 2022 23:18:42 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cbt42 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-cbt42:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/hello-node-54fbb85-swswq to functional-20220511231058-7184
Normal Pulled 32m kubelet, functional-20220511231058-7184 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 32m kubelet, functional-20220511231058-7184 Created container echoserver
Normal Started 32m kubelet, functional-20220511231058-7184 Started container echoserver
Name: hello-node-connect-74cf8bc446-45d4d
Namespace: default
Priority: 0
Node: functional-20220511231058-7184/192.168.49.2
Start Time: Wed, 11 May 2022 23:17:39 +0000
Labels: app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID: docker://8b3160aeac55c9783a8f576430c7a12c8654fb2ca05dd6d8c2e1b9b96bff8c4f
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 11 May 2022 23:18:33 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdxjs (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-cdxjs:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33m default-scheduler Successfully assigned default/hello-node-connect-74cf8bc446-45d4d to functional-20220511231058-7184
Normal Pulling 33m kubelet, functional-20220511231058-7184 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 32m kubelet, functional-20220511231058-7184 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 47.1956236s
Normal Created 32m kubelet, functional-20220511231058-7184 Created container echoserver
Normal Started 32m kubelet, functional-20220511231058-7184 Started container echoserver
functional_test.go:1414: (dbg) Run: kubectl --context functional-20220511231058-7184 logs -l app=hello-node
functional_test.go:1418: hello-node logs:
functional_test.go:1420: (dbg) Run: kubectl --context functional-20220511231058-7184 describe svc hello-node
functional_test.go:1424: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.99.234.103
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31921/TCP
Endpoints: 172.17.0.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
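Note: from the cluster's point of view the Service looks healthy: type NodePort, port 8080 mapped to NodePort 31921, and an endpoint (172.17.0.7:8080) that matches the running hello-node pod, which suggests the failure above is on the host/tunnel side rather than inside the cluster. A quick cross-check, sketched here assuming the profile name from this run and that curl is available in the node image, is to hit the NodePort from inside the minikube node:

  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -- curl -s http://192.168.49.2:31921/

echoserver normally answers with a plain-text echo of the request, so any response at all would confirm the in-cluster path is working.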
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220511231058-7184
helpers_test.go:231: (dbg) Done: docker inspect functional-20220511231058-7184: (1.06803s)
helpers_test.go:235: (dbg) docker inspect functional-20220511231058-7184:
-- stdout --
[
{
"Id": "03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e",
"Created": "2022-05-11T23:11:54.2093463Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 21016,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-05-11T23:11:55.1892393Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
"ResolvConfPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/hostname",
"HostsPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/hosts",
"LogPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e-json.log",
"Name": "/functional-20220511231058-7184",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220511231058-7184:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220511231058-7184",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
"MergedDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/merged",
"UpperDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/diff",
"WorkDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20220511231058-7184",
"Source": "/var/lib/docker/volumes/functional-20220511231058-7184/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20220511231058-7184",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220511231058-7184",
"name.minikube.sigs.k8s.io": "functional-20220511231058-7184",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2fbef82a382778c047170c6728b78eff526a37dc48d3a1b6bab2c12784116af8",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63732"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63728"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63729"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63730"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63731"
}
]
},
"SandboxKey": "/var/run/docker/netns/2fbef82a3827",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220511231058-7184": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"03f6e31851f4",
"functional-20220511231058-7184"
],
"NetworkID": "9bc7760fe8956141b37970dabd4c2de8f9f54cc49f02c83af1d07ae10d266b63",
"EndpointID": "571a75d2ebe2c13326f209c314de4e50d603277e93cca451109f303a24d608bc",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220511231058-7184 -n functional-20220511231058-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220511231058-7184 -n functional-20220511231058-7184: (6.5657883s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs -n 25: (8.3836959s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
| image | functional-20220511231058-7184 image save | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220511231058-7184 image rm | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184 | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
| | image ls | | | | | |
| image | functional-20220511231058-7184 image load | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:18 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | image ls | | | | | |
| image | functional-20220511231058-7184 image save --daemon | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184 | | | | | |
| cp | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | cp testdata\cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | ssh -n | | | | | |
| | functional-20220511231058-7184 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-20220511231058-7184 cp functional-20220511231058-7184:/home/docker/cp-test.txt | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3903165102\001\cp-test.txt | | | | | |
| ssh | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | ssh -n | | | | | |
| | functional-20220511231058-7184 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| service | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| | service list | | | | | |
| profile | list --output json | minikube | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
| profile | list | minikube | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:19 GMT |
| profile | list -l | minikube | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| profile | list -o json | minikube | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| profile | list -o json --light | minikube | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| update-context | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | image ls --format short | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | image ls --format yaml | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
| | image ls --format json | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:20 GMT |
| | image ls --format table | | | | | |
| image | functional-20220511231058-7184 image build -t | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:20 GMT |
| | localhost/my-image:functional-20220511231058-7184 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220511231058-7184 | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:20 GMT | 11 May 22 23:20 GMT |
| | image ls | | | | | |
|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/05/11 23:19:06
Running on machine: minikube4
Binary: Built with gc go1.18.1 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0511 23:19:06.698773 9584 out.go:296] Setting OutFile to fd 800 ...
I0511 23:19:06.757884 9584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0511 23:19:06.757884 9584 out.go:309] Setting ErrFile to fd 572...
I0511 23:19:06.757884 9584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0511 23:19:06.772410 9584 out.go:303] Setting JSON to false
I0511 23:19:06.774880 9584 start.go:115] hostinfo: {"hostname":"minikube4","uptime":9600,"bootTime":1652301546,"procs":167,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W0511 23:19:06.774945 9584 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0511 23:19:06.779150 9584 out.go:177] * [functional-20220511231058-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
I0511 23:19:06.783005 9584 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I0511 23:19:06.785051 9584 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I0511 23:19:06.788009 9584 out.go:177] - MINIKUBE_LOCATION=13639
I0511 23:19:06.790019 9584 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0511 23:19:06.793788 9584 config.go:178] Loaded profile config "functional-20220511231058-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0511 23:19:06.795256 9584 driver.go:358] Setting default libvirt URI to qemu:///system
I0511 23:19:09.423075 9584 docker.go:137] docker version: linux-20.10.14
I0511 23:19:09.431754 9584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0511 23:19:11.509117 9584 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0772594s)
I0511 23:19:11.509117 9584 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:19:10.4452228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0511 23:19:11.513113 9584 out.go:177] * Using the docker driver based on existing profile
I0511 23:19:11.517656 9584 start.go:284] selected driver: docker
I0511 23:19:11.517712 9584 start.go:801] validating driver "docker" against &{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registr
y-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0511 23:19:11.518055 9584 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0511 23:19:11.543847 9584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0511 23:19:13.666248 9584 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1222954s)
I0511 23:19:13.666648 9584 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:19:12.5944059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0511 23:19:15.049183 9584 cni.go:95] Creating CNI manager for ""
I0511 23:19:15.049183 9584 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0511 23:19:15.049716 9584 start_flags.go:306] config:
{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisione
r-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0511 23:19:15.053088 9584 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Wed 2022-05-11 23:11:55 UTC, end at Wed 2022-05-11 23:51:13 UTC. --
May 11 23:13:11 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:13:11.986682600Z" level=info msg="ignoring event" container=46b5638c21176fa40751b7ed9541c4ea5c01223705ebe6b9b99c1b268479f66e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.686348000Z" level=info msg="ignoring event" container=94b39cd31ab0c72c81972d03ac089a6623359390d6905d683930f2933abedf9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.791479600Z" level=info msg="ignoring event" container=b3d71c27cc01f5ffccc9b1f78f0b44ea932b066232616ea4bed579112cca1639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.793385200Z" level=info msg="ignoring event" container=1d317d3b1b9d543fcb1eb150b6f05fdc65bd939a340f88724c115fbf3e5df0c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.886629900Z" level=info msg="ignoring event" container=840d6d1e99cbc0fac642f4a4f9a98b10f11de239630bedbf583cae69b7e41439 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.887410500Z" level=info msg="ignoring event" container=81972fb122db21bbc81d05c3c7958ca071de16b06b57384065757e3b284492ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.888686400Z" level=info msg="ignoring event" container=d1e04917fa9dd04a1a1e0d5398a4b8e34aa08916f31d36fa393fa076f4e33c72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.990975200Z" level=info msg="ignoring event" container=b980e1dc9dff756563d38dedde660771a3ea2052ce403ed563d5b9e756c71b67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.991042600Z" level=info msg="ignoring event" container=1a6e3c895b281388a82578725431a660628a2ef732fb191777bcae11bd2409e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.991085800Z" level=info msg="ignoring event" container=4917c52c05aedf627fa50bf2d549a5070e0be28363992ab60d160c83109eeb9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:22 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:22.084176400Z" level=info msg="ignoring event" container=bb0212ae9a4f0846fc90e1c61d9b3264900b36b8b3eefbe2b1815dced3d8b8d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:22 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:22.101679700Z" level=info msg="ignoring event" container=35d20597a3031cf7712c0fd37f094f0b1b0e73d1a8b3e808446e4bc82c309e96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:23 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:23.303724300Z" level=info msg="ignoring event" container=7250147c7b86e30fcb1b3a1ff069738a13ab18480ab607b1bf342c7ba726ea1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:23 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:23.444949900Z" level=info msg="ignoring event" container=f6b592ff517b6c80718b42d2c2b0cb4915c33cd06e0547ff70bb368316762e18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:26 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:26.396920900Z" level=info msg="ignoring event" container=054b5a4259fc6c6b3078fe1463cdf144c5fb70cdb1ae26bce699f83ae80a8a46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:26 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:26.884275600Z" level=info msg="ignoring event" container=820bc561a4b95cd5d3c5e66a6e87a706208adf1710e10bf364addbfb9905eab3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:37 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:37.895345900Z" level=info msg="ignoring event" container=581faf92a79bc1afb71117206b24bb52343b9e3f07253179ae4b0bd50efcbf9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.390974800Z" level=info msg="ignoring event" container=586a63724b4eab4e0886094bc28f383ce3be23b90d0e4a05b8ab05f7857af696 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.485239400Z" level=info msg="ignoring event" container=cd253946b71d20ba6c9790c5c63663b315acd6385b962cc2e898a2af78011a7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.922683100Z" level=info msg="ignoring event" container=0b073da11a84575abffd481ee4a9189ab0ea3ed5275edcb9bd260e2e6a69f683 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:15:46 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:46.005625700Z" level=info msg="ignoring event" container=786a5a9d87a93c95097481a015911f9d2570282a7470c793903ed965c4867e82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:17:25 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:17:25.101446800Z" level=info msg="ignoring event" container=989cc49a569115d47f4b5af21843899ec02717efdb51fcc5a493944b4abcb8eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:17:25 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:17:25.570631200Z" level=info msg="ignoring event" container=4554d1c83b89c3955db419414fc517accead4c678647f4be2667aae3404160ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:20:03 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:20:03.639717500Z" level=info msg="ignoring event" container=d83413bface64a373bc0040660ced87c8eb5f7b4549f5d1ca6351bfed3e955e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 23:20:04 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:20:04.266227700Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b131f8171a5ce 82e4c8a736a4f 32 minutes ago Running echoserver 0 bcbdd1bb4acb8
8b3160aeac55c k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 32 minutes ago Running echoserver 0 e766462e96943
7822ed0d85399 mysql@sha256:16e159331007eccc069822f7b731272043ed572a79a196a05ffa2ea127caaf67 32 minutes ago Running mysql 0 18bc3c9904b04
31f7310ad3ac2 nginx@sha256:19da26bd6ef0468ac8ef5c03f01ce1569a4dbfb82d4d7b7ffbd7aed16ad3eb46 33 minutes ago Running myfrontend 0 f2b1f727d19a5
c971251e39c4b nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31 34 minutes ago Running nginx 0 f111573c9fb7a
cbc60d45890a7 6e38f40d628db 35 minutes ago Running storage-provisioner 3 6323969a7b288
57d3d67362af7 b0c9e5e4dbb14 35 minutes ago Running kube-controller-manager 2 af21574f7d7d5
4ef13dc1d0152 3fc1d62d65872 35 minutes ago Running kube-apiserver 1 3aadaf90b1667
f18fca8e8bda6 a4ca41631cc7a 35 minutes ago Running coredns 1 f97e337244440
0b073da11a845 6e38f40d628db 35 minutes ago Exited storage-provisioner 2 6323969a7b288
581faf92a79bc 3fc1d62d65872 35 minutes ago Exited kube-apiserver 0 3aadaf90b1667
b592b11477259 884d49d6d8c9f 35 minutes ago Running kube-scheduler 1 90c7267330c22
3d613c6cc10dc 25f8c7f3da61c 35 minutes ago Running etcd 1 915e885bbb52e
786a5a9d87a93 b0c9e5e4dbb14 35 minutes ago Exited kube-controller-manager 1 af21574f7d7d5
c64417fcd163e 3c53fa8541f95 35 minutes ago Running kube-proxy 1 01f72a604ccd6
820bc561a4b95 a4ca41631cc7a 38 minutes ago Exited coredns 0 4917c52c05aed
d1e04917fa9dd 3c53fa8541f95 38 minutes ago Exited kube-proxy 0 1a6e3c895b281
f6b592ff517b6 884d49d6d8c9f 38 minutes ago Exited kube-scheduler 0 b980e1dc9dff7
35d20597a3031 25f8c7f3da61c 38 minutes ago Exited etcd 0 840d6d1e99cbc
*
* ==> coredns [820bc561a4b9] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
[INFO] Reloading complete
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [f18fca8e8bda] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: functional-20220511231058-7184
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220511231058-7184
kubernetes.io/os=linux
minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
minikube.k8s.io/name=functional-20220511231058-7184
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_05_11T23_12_46_0700
minikube.k8s.io/version=v1.25.2
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 11 May 2022 23:12:42 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220511231058-7184
AcquireTime: <unset>
RenewTime: Wed, 11 May 2022 23:51:14 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 11 May 2022 23:46:09 +0000 Wed, 11 May 2022 23:12:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 May 2022 23:46:09 +0000 Wed, 11 May 2022 23:12:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 May 2022 23:46:09 +0000 Wed, 11 May 2022 23:12:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 11 May 2022 23:46:09 +0000 Wed, 11 May 2022 23:12:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220511231058-7184
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: 8556a0a9a0e64ba4b825f672d2dce0b9
System UUID: 8556a0a9a0e64ba4b825f672d2dce0b9
Boot ID: 10186544-b659-4889-afdb-c2512535b797
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.15
Kubelet Version: v1.23.5
Kube-Proxy Version: v1.23.5
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54fbb85-swswq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
default hello-node-connect-74cf8bc446-45d4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default mysql-b87c45988-v7bjw 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 33m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
kube-system coredns-64897985d-cvj5g 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220511231058-7184 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220511231058-7184 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35m
kube-system kube-controller-manager-functional-20220511231058-7184 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-q6649 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220511231058-7184 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 35m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasNoDiskPressure 38m (x5 over 38m) kubelet Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m (x5 over 38m) kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 38m (x6 over 38m) kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
Normal Starting 38m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220511231058-7184 status is now: NodeReady
Normal Starting 35m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 35m (x8 over 35m) kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 35m (x8 over 35m) kubelet Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 35m (x7 over 35m) kubelet Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 35m kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [May11 23:26] WSL2: Performing memory compaction.
[May11 23:27] WSL2: Performing memory compaction.
[May11 23:28] WSL2: Performing memory compaction.
[May11 23:29] WSL2: Performing memory compaction.
[May11 23:30] WSL2: Performing memory compaction.
[May11 23:31] WSL2: Performing memory compaction.
[May11 23:32] WSL2: Performing memory compaction.
[May11 23:33] WSL2: Performing memory compaction.
[May11 23:34] WSL2: Performing memory compaction.
[May11 23:35] WSL2: Performing memory compaction.
[May11 23:36] WSL2: Performing memory compaction.
[May11 23:37] WSL2: Performing memory compaction.
[May11 23:38] WSL2: Performing memory compaction.
[May11 23:39] WSL2: Performing memory compaction.
[May11 23:40] WSL2: Performing memory compaction.
[May11 23:41] WSL2: Performing memory compaction.
[May11 23:42] WSL2: Performing memory compaction.
[May11 23:43] WSL2: Performing memory compaction.
[May11 23:44] WSL2: Performing memory compaction.
[May11 23:45] WSL2: Performing memory compaction.
[May11 23:46] WSL2: Performing memory compaction.
[May11 23:47] WSL2: Performing memory compaction.
[May11 23:48] WSL2: Performing memory compaction.
[May11 23:49] WSL2: Performing memory compaction.
[May11 23:50] WSL2: Performing memory compaction.
*
* ==> etcd [35d20597a303] <==
* {"level":"info","ts":"2022-05-11T23:12:37.594Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-05-11T23:12:37.594Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-05-11T23:12:37.596Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-05-11T23:12:37.676Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-05-11T23:12:37.677Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-05-11T23:12:37.678Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.8011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-node-high\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-05-11T23:12:42.199Z","caller":"traceutil/trace.go:171","msg":"trace[799166586] range","detail":"{range_begin:/registry/flowschemas/system-node-high; range_end:; response_count:0; response_revision:16; }","duration":"112.9992ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.199Z","steps":["trace[799166586] 'agreement among raft nodes before linearized reading' (duration: 98.8954ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-2qrwc\" ","response":"range_response_count:1 size:942"}
{"level":"info","ts":"2022-05-11T23:12:42.199Z","caller":"traceutil/trace.go:171","msg":"trace[765925878] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-2qrwc; range_end:; response_count:1; response_revision:16; }","duration":"113.0938ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.199Z","steps":["trace[765925878] 'agreement among raft nodes before linearized reading' (duration: 98.8066ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.9243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-20220511231058-7184\" ","response":"range_response_count:1 size:2902"}
{"level":"info","ts":"2022-05-11T23:12:42.200Z","caller":"traceutil/trace.go:171","msg":"trace[1112470360] range","detail":"{range_begin:/registry/minions/functional-20220511231058-7184; range_end:; response_count:1; response_revision:16; }","duration":"113.4194ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.200Z","steps":["trace[1112470360] 'agreement among raft nodes before linearized reading' (duration: 98.9282ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:13:06.604Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.4916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cvj5g\" ","response":"range_response_count:1 size:4343"}
{"level":"info","ts":"2022-05-11T23:13:06.604Z","caller":"traceutil/trace.go:171","msg":"trace[1742312913] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cvj5g; range_end:; response_count:1; response_revision:460; }","duration":"116.6522ms","start":"2022-05-11T23:13:06.488Z","end":"2022-05-11T23:13:06.604Z","steps":["trace[1742312913] 'agreement among raft nodes before linearized reading' (duration: 97.7739ms)","trace[1742312913] 'range keys from in-memory index tree' (duration: 18.6838ms)"],"step_count":2}
{"level":"info","ts":"2022-05-11T23:15:21.495Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-05-11T23:15:21.495Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220511231058-7184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/05/11 23:15:21 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/05/11 23:15:21 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-05-11T23:15:21.683Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-05-11T23:15:21.883Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-05-11T23:15:21.885Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-05-11T23:15:21.885Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220511231058-7184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> etcd [3d613c6cc10d] <==
* {"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.5896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/nginx-svc\" ","response":"range_response_count:1 size:1130"}
{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.1830982s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[676878884] range","detail":"{range_begin:/registry/services/specs/default/nginx-svc; range_end:; response_count:1; response_revision:850; }","duration":"163.6942ms","start":"2022-05-11T23:18:17.131Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[676878884] 'agreement among raft nodes before linearized reading' (duration: 163.6126ms)"],"step_count":1}
{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[1720153301] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:850; }","duration":"1.1831577s","start":"2022-05-11T23:18:16.112Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[1720153301] 'agreement among raft nodes before linearized reading' (duration: 1.1830238s)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.112Z","time spent":"1.183216s","remote":"127.0.0.1:41402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10974,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.2401692s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[1439899376] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:850; }","duration":"1.2402227s","start":"2022-05-11T23:18:16.055Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[1439899376] 'agreement among raft nodes before linearized reading' (duration: 1.2401354s)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.055Z","time spent":"1.24028s","remote":"127.0.0.1:41486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.0695196s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
{"level":"info","ts":"2022-05-11T23:18:17.296Z","caller":"traceutil/trace.go:171","msg":"trace[170829236] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:850; }","duration":"1.0696713s","start":"2022-05-11T23:18:16.226Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[170829236] 'agreement among raft nodes before linearized reading' (duration: 1.0693492s)"],"step_count":1}
{"level":"warn","ts":"2022-05-11T23:18:17.296Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.226Z","time spent":"1.0698127s","remote":"127.0.0.1:41402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10974,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-05-11T23:18:18.409Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"178.1684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
{"level":"info","ts":"2022-05-11T23:18:18.409Z","caller":"traceutil/trace.go:171","msg":"trace[1913611101] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:852; }","duration":"178.4104ms","start":"2022-05-11T23:18:18.230Z","end":"2022-05-11T23:18:18.409Z","steps":["trace[1913611101] 'range keys from in-memory index tree' (duration: 178.0301ms)"],"step_count":1}
{"level":"info","ts":"2022-05-11T23:25:42.345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":997}
{"level":"info","ts":"2022-05-11T23:25:42.346Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":997,"took":"1.1418ms"}
{"level":"info","ts":"2022-05-11T23:30:42.375Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1208}
{"level":"info","ts":"2022-05-11T23:30:42.376Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1208,"took":"572.6µs"}
{"level":"info","ts":"2022-05-11T23:35:42.412Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1416}
{"level":"info","ts":"2022-05-11T23:35:42.413Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1416,"took":"557µs"}
{"level":"info","ts":"2022-05-11T23:40:42.440Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1626}
{"level":"info","ts":"2022-05-11T23:40:42.441Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1626,"took":"665.9µs"}
{"level":"info","ts":"2022-05-11T23:45:42.472Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1836}
{"level":"info","ts":"2022-05-11T23:45:42.473Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1836,"took":"613.5µs"}
{"level":"info","ts":"2022-05-11T23:50:42.500Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2046}
{"level":"info","ts":"2022-05-11T23:50:42.501Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2046,"took":"550.8µs"}
*
* ==> kernel <==
* 23:51:14 up 59 min, 0 users, load average: 0.37, 0.30, 0.38
Linux functional-20220511231058-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [4ef13dc1d015] <==
* Trace[1497027280]: [1.104645s] [1.104645s] END
I0511 23:18:16.035673 1 trace.go:205] Trace[2037031136]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:13.291) (total time: 2744ms):
Trace[2037031136]: [2.7443687s] [2.7443687s] END
I0511 23:18:16.036433 1 trace.go:205] Trace[629667156]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:44094c38-f711-40de-9f45-07099914c476,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:13.291) (total time: 2745ms):
Trace[629667156]: ---"Listing from storage done" 2744ms (23:18:16.035)
Trace[629667156]: [2.745172s] [2.745172s] END
I0511 23:18:17.296385 1 trace.go:205] Trace[1756583222]: "GuaranteedUpdate etcd3" type:*core.Endpoints (11-May-2022 23:18:16.049) (total time: 1246ms):
Trace[1756583222]: ---"Transaction committed" 1245ms (23:18:17.296)
Trace[1756583222]: [1.2463771s] [1.2463771s] END
I0511 23:18:17.296748 1 trace.go:205] Trace[1060657797]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:70d6ce70-4a47-4be5-89d9-9130eee5da57,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.049) (total time: 1247ms):
Trace[1060657797]: ---"Object stored in database" 1246ms (23:18:17.296)
Trace[1060657797]: [1.2472534s] [1.2472534s] END
I0511 23:18:17.297702 1 trace.go:205] Trace[1987176994]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:16.111) (total time: 1186ms):
Trace[1987176994]: [1.1864729s] [1.1864729s] END
I0511 23:18:17.297713 1 trace.go:205] Trace[1545867522]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:16.225) (total time: 1072ms):
Trace[1545867522]: [1.0723974s] [1.0723974s] END
I0511 23:18:17.298245 1 trace.go:205] Trace[1773512067]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:1c48ec6c-8003-42f7-ae00-beaaf6f9840e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.111) (total time: 1187ms):
Trace[1773512067]: ---"Listing from storage done" 1186ms (23:18:17.297)
Trace[1773512067]: [1.1870671s] [1.1870671s] END
I0511 23:18:17.299712 1 trace.go:205] Trace[2025221454]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:3b14786b-32d7-4840-a3d1-8bbbccb80842,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.225) (total time: 1074ms):
Trace[2025221454]: ---"Listing from storage done" 1072ms (23:18:17.298)
Trace[2025221454]: [1.0744501s] [1.0744501s] END
I0511 23:18:39.813542 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.99.234.103]
W0511 23:31:11.160042 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0511 23:45:34.250494 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-apiserver [581faf92a79b] <==
* I0511 23:15:37.790781 1 server.go:565] external host was not specified, using 192.168.49.2
I0511 23:15:37.792318 1 server.go:172] Version: v1.23.5
E0511 23:15:37.793020 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
*
* ==> kube-controller-manager [57d3d67362af] <==
* I0511 23:16:00.087753 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal.
I0511 23:16:00.088103 1 shared_informer.go:247] Caches are synced for persistent volume
I0511 23:16:00.089337 1 event.go:294] "Event occurred" object="functional-20220511231058-7184" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220511231058-7184 event: Registered Node functional-20220511231058-7184 in Controller"
I0511 23:16:00.091226 1 shared_informer.go:247] Caches are synced for daemon sets
I0511 23:16:00.092020 1 shared_informer.go:247] Caches are synced for ReplicationController
I0511 23:16:00.108637 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0511 23:16:00.185042 1 shared_informer.go:247] Caches are synced for attach detach
I0511 23:16:00.195225 1 shared_informer.go:247] Caches are synced for cronjob
I0511 23:16:00.195402 1 shared_informer.go:247] Caches are synced for resource quota
I0511 23:16:00.198862 1 shared_informer.go:247] Caches are synced for job
I0511 23:16:00.199096 1 shared_informer.go:247] Caches are synced for TTL after finished
I0511 23:16:00.285661 1 shared_informer.go:247] Caches are synced for disruption
I0511 23:16:00.285826 1 disruption.go:371] Sending events to api server.
I0511 23:16:00.285781 1 shared_informer.go:247] Caches are synced for resource quota
I0511 23:16:00.689474 1 shared_informer.go:247] Caches are synced for garbage collector
I0511 23:16:00.689578 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0511 23:16:00.709700 1 shared_informer.go:247] Caches are synced for garbage collector
I0511 23:16:57.190062 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0511 23:16:57.190231 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0511 23:17:30.215041 1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
I0511 23:17:30.401067 1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-v7bjw"
I0511 23:17:39.191119 1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
I0511 23:17:39.292950 1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-45d4d"
I0511 23:18:39.455195 1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
I0511 23:18:39.470795 1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-swswq"
*
* ==> kube-controller-manager [786a5a9d87a9] <==
* /usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00031d500, {0x4d4fe80, 0xc0006dc0a0}, 0x8ef)
/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc00031d500, 0x0)
/usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc00031d500, {0xc000d40000, 0x1000, 0x919560})
/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc0003b13e0, {0xc000d2a120, 0x9, 0x934bc2})
/usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x4d47860, 0xc0003b13e0}, {0xc000d2a120, 0x9, 0x9}, 0x9)
/usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
/usr/local/go/src/io/io.go:347
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc000d2a120, 0x9, 0xc001f7d3e0}, {0x4d47860, 0xc0003b13e0})
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000d2a0e0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000aaaf98)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000d3e000)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
*
* ==> kube-proxy [c64417fcd163] <==
* E0511 23:15:25.807112 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0511 23:15:25.887727 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0511 23:15:25.891061 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0511 23:15:25.894054 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0511 23:15:25.897705 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0511 23:15:25.900800 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0511 23:15:25.903815 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
E0511 23:15:26.984233 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
E0511 23:15:36.087665 1 node.go:152] Failed to retrieve node info: nodes "functional-20220511231058-7184" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
E0511 23:15:40.222898 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
I0511 23:15:49.002000 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0511 23:15:49.002075 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0511 23:15:49.002248 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0511 23:15:49.210604 1 server_others.go:206] "Using iptables Proxier"
I0511 23:15:49.210824 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0511 23:15:49.210846 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0511 23:15:49.210954 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0511 23:15:49.211781 1 server.go:656] "Version info" version="v1.23.5"
I0511 23:15:49.212709 1 config.go:317] "Starting service config controller"
I0511 23:15:49.212838 1 config.go:226] "Starting endpoint slice config controller"
I0511 23:15:49.212885 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0511 23:15:49.212886 1 shared_informer.go:240] Waiting for caches to sync for service config
I0511 23:15:49.314174 1 shared_informer.go:247] Caches are synced for service config
I0511 23:15:49.314367 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [d1e04917fa9d] <==
* E0511 23:13:03.998105 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0511 23:13:04.076219 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0511 23:13:04.084843 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0511 23:13:04.088049 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0511 23:13:04.091094 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0511 23:13:04.094880 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0511 23:13:04.378379 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0511 23:13:04.378561 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0511 23:13:04.378657 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0511 23:13:04.690252 1 server_others.go:206] "Using iptables Proxier"
I0511 23:13:04.690427 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0511 23:13:04.690442 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0511 23:13:04.690473 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0511 23:13:04.691292 1 server.go:656] "Version info" version="v1.23.5"
I0511 23:13:04.692354 1 config.go:226] "Starting endpoint slice config controller"
I0511 23:13:04.692504 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0511 23:13:04.692710 1 config.go:317] "Starting service config controller"
I0511 23:13:04.692727 1 shared_informer.go:240] Waiting for caches to sync for service config
I0511 23:13:04.792728 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0511 23:13:04.793082 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [b592b1147725] <==
* I0511 23:15:36.096210 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0511 23:15:36.096798 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0511 23:15:36.188763 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.188806 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.191208 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.191387 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.191497 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.191554 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0511 23:15:36.191596 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0511 23:15:36.196462 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0511 23:15:45.391917 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0511 23:15:45.483432 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0511 23:15:45.483551 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0511 23:15:45.483639 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
E0511 23:15:45.483821 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0511 23:15:45.483936 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0511 23:15:45.484268 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0511 23:15:45.484554 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0511 23:15:45.484687 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0511 23:15:45.484858 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
E0511 23:15:45.485098 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0511 23:15:45.485223 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0511 23:15:45.485386 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
E0511 23:15:45.485529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0511 23:15:45.489100 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
*
* ==> kube-scheduler [f6b592ff517b] <==
* E0511 23:12:43.322282 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0511 23:12:43.377562 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0511 23:12:43.377698 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0511 23:12:43.413404 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0511 23:12:43.413530 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0511 23:12:43.478082 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0511 23:12:43.478132 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0511 23:12:43.530843 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0511 23:12:43.531072 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0511 23:12:43.577224 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0511 23:12:43.577365 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0511 23:12:43.597323 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0511 23:12:43.598137 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0511 23:12:43.610261 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0511 23:12:43.610393 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0511 23:12:43.627096 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0511 23:12:43.627235 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0511 23:12:43.777146 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0511 23:12:43.777263 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0511 23:12:43.778001 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0511 23:12:43.778129 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0511 23:12:46.484430 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0511 23:15:21.688634 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0511 23:15:21.689615 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0511 23:15:21.690032 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
*
* ==> kubelet <==
* -- Logs begin at Wed 2022-05-11 23:11:55 UTC, end at Wed 2022-05-11 23:51:15 UTC. --
May 11 23:17:30 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:30.593324 6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7tcf\" (UniqueName: \"kubernetes.io/projected/13b3d752-6f39-45ca-88ec-1269924c718e-kube-api-access-g7tcf\") pod \"mysql-b87c45988-v7bjw\" (UID: \"13b3d752-6f39-45ca-88ec-1269924c718e\") " pod="default/mysql-b87c45988-v7bjw"
May 11 23:17:31 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:31.994227 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
May 11 23:17:31 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:31.994792 6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="18bc3c9904b049acbe9709de816ea0c14a58a05cddad165d5da363cb5bd26c70"
May 11 23:17:33 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:33.013745 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
May 11 23:17:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:39.392560 6148 topology_manager.go:200] "Topology Admit Handler"
May 11 23:17:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:39.505583 6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxjs\" (UniqueName: \"kubernetes.io/projected/d6e35845-c328-4eda-87b4-8ae2f5d132bf-kube-api-access-cdxjs\") pod \"hello-node-connect-74cf8bc446-45d4d\" (UID: \"d6e35845-c328-4eda-87b4-8ae2f5d132bf\") " pod="default/hello-node-connect-74cf8bc446-45d4d"
May 11 23:17:41 functional-20220511231058-7184 kubelet[6148]: E0511 23:17:41.600535 6148 kuberuntime_manager.go:1065] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3" podSandboxID="e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3" pod="default/hello-node-connect-74cf8bc446-45d4d"
May 11 23:17:45 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:45.407427 6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3"
May 11 23:17:45 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:45.407953 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
May 11 23:17:46 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:46.502450 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
May 11 23:18:19 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:19.117657 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
May 11 23:18:20 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:20.306238 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
May 11 23:18:34 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:34.003546 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
May 11 23:18:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:39.501209 6148 topology_manager.go:200] "Topology Admit Handler"
May 11 23:18:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:39.692275 6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt42\" (UniqueName: \"kubernetes.io/projected/0c2db6df-37b9-4201-b3a9-44e6d839ff68-kube-api-access-cbt42\") pod \"hello-node-54fbb85-swswq\" (UID: \"0c2db6df-37b9-4201-b3a9-44e6d839ff68\") " pod="default/hello-node-54fbb85-swswq"
May 11 23:18:41 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:41.896771 6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bcbdd1bb4acb8e7c121f11d4744ce17b9f8e50f836089e58b1fa1e8b5735ff29"
May 11 23:18:41 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:41.897080 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-swswq through plugin: invalid network status for"
May 11 23:18:42 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:42.924303 6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-swswq through plugin: invalid network status for"
May 11 23:20:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:20:34.705922 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:25:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:25:34.720662 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:30:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:30:34.735541 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:35:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:35:34.751063 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:40:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:40:34.765005 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:45:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:45:34.779955 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
May 11 23:50:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:50:34.796315 6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [0b073da11a84] <==
* I0511 23:15:38.888473 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0511 23:15:38.891816 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [cbc60d45890a] <==
* I0511 23:15:52.429194 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0511 23:15:52.455864 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0511 23:15:52.455991 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0511 23:16:10.010847 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0511 23:16:10.011353 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f16db442-9f3a-4556-9cd7-8f812974ae3f", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059 became leader
I0511 23:16:10.011542 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059!
I0511 23:16:10.112171 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059!
I0511 23:16:57.189570 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0511 23:16:57.189876 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 6daa51e5-b71b-4ede-a6c6-744ebb56aad5 460 0 2022-05-11 23:13:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-05-11 23:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5ab57200-48da-4413-b811-626ed007f66e &PersistentVolumeClaim{ObjectMeta:{myclaim default 5ab57200-48da-4413-b811-626ed007f66e 723 0 2022-05-11 23:16:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-05-11 23:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-05-11 23:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{}
,Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0511 23:16:57.192717 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5ab57200-48da-4413-b811-626ed007f66e", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0511 23:16:57.192906 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5ab57200-48da-4413-b811-626ed007f66e" provisioned
I0511 23:16:57.193054 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0511 23:16:57.193275 1 volume_store.go:212] Trying to save persistentvolume "pvc-5ab57200-48da-4413-b811-626ed007f66e"
I0511 23:16:57.208460 1 volume_store.go:219] persistentvolume "pvc-5ab57200-48da-4413-b811-626ed007f66e" saved
I0511 23:16:57.208684 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5ab57200-48da-4413-b811-626ed007f66e", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5ab57200-48da-4413-b811-626ed007f66e
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220511231058-7184 -n functional-20220511231058-7184
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220511231058-7184 -n functional-20220511231058-7184: (6.5100218s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220511231058-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220511231058-7184 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 describe pod : exit status 1 (248.7123ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220511231058-7184 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1963.92s)