=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-001800 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run: kubectl --context functional-001800 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-5fcdfb5cc4-b97dm" [4aa1dcdd-70ee-4b4e-a11f-0a9eec1bd844] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-5fcdfb5cc4-b97dm" [4aa1dcdd-70ee-4b4e-a11f-0a9eec1bd844] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 33.0805934s
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-001800 service list
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-001800 service list: (2.4275069s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-001800 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-001800 service --namespace=default --https --url hello-node: exit status 1 (34m16.8872119s)
-- stdout --
https://127.0.0.1:65060
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-001800 service --namespace=default --https --url hello-node" : exit status 1
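A reading of the failure (interpretation, not test output): on the Docker driver minikube cannot publish NodePorts to the Windows host directly, so "minikube service" opens a tunnel and keeps it alive for as long as the process runs. The command therefore prints https://127.0.0.1:65060 but never exits, the harness cannot interrupt it on Windows (see functional_test.go:1392 above), and the Run call only returns when the surrounding test timeout kills it after 34m16s. A manual reproduction sketch under the same assumptions (same profile name; the first command blocks, so it needs its own terminal, and the URL must be taken from whatever it actually prints):

    out/minikube-windows-amd64.exe -p functional-001800 service --namespace=default --https --url hello-node
    curl.exe -k https://127.0.0.1:65060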
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-001800 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-b97dm
Namespace: default
Priority: 0
Node: functional-001800/192.168.49.2
Start Time: Mon, 23 Jan 2023 03:32:54 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://6505715f773437bfd5dd35c12f521d7122e60a480df279c94dfd15858c44d9f0
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 23 Jan 2023 03:33:21 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gbq8 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5gbq8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-b97dm to functional-001800
Normal Pulling 34m kubelet, functional-001800 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 34m kubelet, functional-001800 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 24.4871129s
Normal Created 34m kubelet, functional-001800 Created container echoserver
Normal Started 34m kubelet, functional-001800 Started container echoserver
Name: hello-node-connect-6458c8fb6f-pvcnj
Namespace: default
Priority: 0
Node: functional-001800/192.168.49.2
Start Time: Mon, 23 Jan 2023 03:35:07 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 10.244.0.12
IPs:
IP: 10.244.0.12
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://c19130cdfd32d439867a3a9163a97bd969ad0d5bb4cac403feccd37df5e7c422
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 23 Jan 2023 03:35:11 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5b6m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-w5b6m:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-pvcnj to functional-001800
Normal Pulled 32m kubelet, functional-001800 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 32m kubelet, functional-001800 Created container echoserver
Normal Started 32m kubelet, functional-001800 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-001800 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-001800 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.99.201.92
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30696/TCP
Endpoints: 10.244.0.8:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
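Everything in the describe above looks healthy from inside the cluster: the selector matches the running pod, Endpoints is populated with 10.244.0.8:8080, and NodePort 30696 is allocated. That points the failure at the host-side tunnel rather than at Kubernetes. A quick in-node sanity check (a sketch, assuming curl is present in the kicbase node image, and using the NodePort from the describe above):

    out/minikube-windows-amd64.exe -p functional-001800 ssh -- curl -s http://localhost:30696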
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-001800
helpers_test.go:235: (dbg) docker inspect functional-001800:
-- stdout --
[
{
"Id": "b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a",
"Created": "2023-01-23T03:28:24.9015245Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 31657,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-01-23T03:28:25.7864306Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:243d3449b30fd2029b685cafa1191f13fbce109441e8c74001ff370d444b1927",
"ResolvConfPath": "/var/lib/docker/containers/b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a/hostname",
"HostsPath": "/var/lib/docker/containers/b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a/hosts",
"LogPath": "/var/lib/docker/containers/b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a/b8efd25c456e9772d110a8e9a86aaf3ab5c180e09c993fb18701e0eec9a8f58a-json.log",
"Name": "/functional-001800",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-001800:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-001800",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/40985a7cb6b84025f02c3a827a976f85d86001f60f14fe01f59fde988bc82d01-init/diff:/var/lib/docker/overlay2/f9cccb2623e4cdfd2a6193778aaf2e1b55fbacbd12a69506577f5f883e8ed79d/diff:/var/lib/docker/overlay2/765b5f47eab6d8cc22fb6e638915ca924dc649e8ec378fb9b9f5051b7c360e8b/diff:/var/lib/docker/overlay2/6695b25f4b27677bb0fc38c7dc9428b86b31086feac0aecefc5e36f588de588b/diff:/var/lib/docker/overlay2/f4c159eb3a653ddaaa60e6edea5758593a3b4de0f02760346fcca651909a947d/diff:/var/lib/docker/overlay2/3e2271ea51b0a84c6f0f3ae8293250fd9d4c7b9ce4a409c4b8a8982c64759926/diff:/var/lib/docker/overlay2/eef6cf1635a83dbf84a2e416635aa97bbc1871c9db5cec3222616304c30ee5cf/diff:/var/lib/docker/overlay2/7c0008e3ad03134dd721433a1a51b7eda3ef9d98ad665ab78f1e7bd7942fc9a5/diff:/var/lib/docker/overlay2/b361f5d0872c3f14aa11b5084f07bbe50e2cb8f47ae64a475e2e747a06c2161a/diff:/var/lib/docker/overlay2/63246f8f201f15e6e4046a1380039cf2fc3a2d02897baf03f92f1f87a3107f81/diff:/var/lib/docker/overlay2/47aacb
18481705734f7eaa5a61b438502908d7f8ab93ec7b91e5fb145b782614/diff:/var/lib/docker/overlay2/09a4d727662fae91a24c4d741d8c87b44e628cb6d3c6dd2aa2c1dbd49b510082/diff:/var/lib/docker/overlay2/800e5a068b596491996eca15ee74cbba48345784c79e034786b9d93436de2efa/diff:/var/lib/docker/overlay2/2d6f637b7253394038e179774fac5576914998f848a617e0d92664afd6ae3b7d/diff:/var/lib/docker/overlay2/75dae2fc7bc40b13d781148ea2ee66df9f33560f990aa8fcef77bff0870795e4/diff:/var/lib/docker/overlay2/fa94aac5d9b452ac9eb8c48fac8e6c765b539de711f9203c3d287f580c62466f/diff:/var/lib/docker/overlay2/62ba601e9242b413af10265f047e29de36b6c09c2be57122a633644ffdef0c33/diff:/var/lib/docker/overlay2/62394d95f6c58f3b124f3857280e1dc56dd0899fb9923bcd16d4896382cb81de/diff:/var/lib/docker/overlay2/07b1cf8b88687f7395411000c4b7c8268b06b312b265a5c3680daec339c13070/diff:/var/lib/docker/overlay2/f398b87d34df75b91a3da28b15c36b55da1c18a010e132fd818e984928e1cecf/diff:/var/lib/docker/overlay2/e6ebe7258be073e069b2fb98f57c04abaffff6d7aeaf06d694c61f0569353034/diff:/var/lib/d
ocker/overlay2/e78fc778e3ffd7f89ac6aeecb21dbab2c8cad8a6a549a0e6c31f42d5a3dcf9bb/diff:/var/lib/docker/overlay2/3a5e58bace1f02d7877d90bad5c3c044d181a6c8f46615a1d4afc9eebd88b32f/diff:/var/lib/docker/overlay2/ceb28c3e3c331a4241c6dd6594db5ffa79db49a134d670d1abd5132c827a6103/diff:/var/lib/docker/overlay2/92c30cd425cba4d7760851bc25ea3fdcaa96a0468c9a90b8cc923c1e515872ad/diff:/var/lib/docker/overlay2/66fe5f022e7e36f0fbb77cb568f9b54fd1c673ff31729441c9c564e89bf92994/diff:/var/lib/docker/overlay2/83d52910571345d4972271333dbc00356fab01be75c94b67d42cec430f36795e/diff:/var/lib/docker/overlay2/c44040d9f03ad00c3d481079e580454437e622026652b65c4c431f333c3cdea5/diff:/var/lib/docker/overlay2/8f2aef52611f742c3b456fdb33f91f6fd3bb30a95fd20340a96cf14889f069b0/diff:/var/lib/docker/overlay2/7cc915f1d19840402f9d7affb626fa092e0ee8c3c5c29ca12d22a820af74c029/diff:/var/lib/docker/overlay2/8b9328361f99f663b4e1e139e7db15494fd97ec0016c4082a66edaa8bb95ba8d/diff:/var/lib/docker/overlay2/33c1dc61b92557604edbb42f67ee0d75ef8f9c396430bd9de024d6b078a
c1efa/diff:/var/lib/docker/overlay2/da578b6f33f1ffb7011d9527bd81eb40ef241b6e128e94691eb39c32325f4f6e/diff:/var/lib/docker/overlay2/1cf0e2e5a9ccbf23986275fcb30b32ff95802f0b8890d176f67becbc80be7914/diff:/var/lib/docker/overlay2/d9bf853dccddc66d2cd562f9849d93b6645254a499002a319e3bc34cb3df8e65/diff:/var/lib/docker/overlay2/542c7d8510c7a30bc23ab6f069a78d9ccbc5982cff11aca0a8ff68ef4714f5a2/diff:/var/lib/docker/overlay2/cae9fe71bb9de9b277f1e189b4908968aea31efb836cd88e215d36160eec2e22/diff:/var/lib/docker/overlay2/36489d5207493047e2fe040826afdb0fd7c7bc155bd728355728a5804a62008b/diff:/var/lib/docker/overlay2/2adbe1704e77c1e1a7416551d92216b17009b75bf9f7374dfbffdd20f8657b20/diff:/var/lib/docker/overlay2/104aea1ce577f3d0562b33eb5ddd6bfd5f6a60a99585b6a598cafbf34a6f3224/diff:/var/lib/docker/overlay2/447e6ef601634968e4ed2f1bb330f80b2e769298c772c071ccab3219e83f0b85/diff:/var/lib/docker/overlay2/9251270e04baab3dca59373e1cc96a4d08b75d9de0567832351d8df7e46f7e2b/diff:/var/lib/docker/overlay2/2c7aa75ef5f5e2ce21aa87787aeaac3a9c7582
cc9febd0cfff0da34b5d90f406/diff:/var/lib/docker/overlay2/9604db54b1e665078cf4b449c92eff062e70aea49644dcccc529e1c0a3eced59/diff:/var/lib/docker/overlay2/889a0d214f3e88cb5c4ccd23d8eff48a996baafd754d6bbc561e6f00527dda5b/diff",
"MergedDir": "/var/lib/docker/overlay2/40985a7cb6b84025f02c3a827a976f85d86001f60f14fe01f59fde988bc82d01/merged",
"UpperDir": "/var/lib/docker/overlay2/40985a7cb6b84025f02c3a827a976f85d86001f60f14fe01f59fde988bc82d01/diff",
"WorkDir": "/var/lib/docker/overlay2/40985a7cb6b84025f02c3a827a976f85d86001f60f14fe01f59fde988bc82d01/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-001800",
"Source": "/var/lib/docker/volumes/functional-001800/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-001800",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1673540226-15630@sha256:03c9592728381094cbd0ff9603f75ae6b485dd7a390c3e35f02ae5ec10f2f3ad",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-001800",
"name.minikube.sigs.k8s.io": "functional-001800",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "031570db3037d9b3d61758492dae9df6e89d7fcc76847f9b9ef93f58135f0f19",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64726"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64727"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64728"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64729"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "64730"
}
]
},
"SandboxKey": "/var/run/docker/netns/031570db3037",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-001800": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"b8efd25c456e",
"functional-001800"
],
"NetworkID": "cb11953c1ff0a9312d90a9221c6ddc0c8b03a87ff2a0f1fbaff2172522d4a6dd",
"EndpointID": "15fae0377d6107c18ae5fdbcc1e960796710648945b620cc32badb26f5db508b",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
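The NetworkSettings.Ports block above shows why a tunnel is needed at all: the kicbase container only publishes 22, 2376, 5000, 8441 and 32443 to ephemeral loopback ports on the Windows host, so the service's NodePort 30696 is never reachable from the host without help. The same mappings can be confirmed with the plain Docker CLI (a sketch; the container name comes from the inspect above):

    docker port functional-001800
    docker port functional-001800 8441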
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-001800 -n functional-001800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-001800 -n functional-001800: (1.4982961s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-001800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-001800 logs -n 25: (3.1177972s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| image | functional-001800 image save --daemon | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | 23 Jan 23 03:34 GMT |
| | gcr.io/google-containers/addon-resizer:functional-001800 | | | | | |
| start | -p functional-001800 | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| start | -p functional-001800 | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| start | -p functional-001800 --dry-run | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=docker | | | | | |
| ssh | functional-001800 ssh echo | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | 23 Jan 23 03:34 GMT |
| | hello | | | | | |
| ssh | functional-001800 ssh cat | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | 23 Jan 23 03:34 GMT |
| | /etc/hostname | | | | | |
| tunnel | functional-001800 tunnel | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:34 GMT | |
| | --alsologtostderr | | | | | |
| addons | functional-001800 addons list | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| addons | functional-001800 addons list | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | -o json | | | | | |
| profile | lis | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | |
| profile | list --output json | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| profile | list | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| profile | list -l | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| profile | list -o json | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| profile | list -o json --light | minikube | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| update-context | functional-001800 | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-001800 | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-001800 | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-001800 image ls | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | --format short | | | | | |
| image | functional-001800 image ls | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | --format yaml | | | | | |
| ssh | functional-001800 ssh pgrep | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | |
| | buildkitd | | | | | |
| image | functional-001800 image build -t | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | localhost/my-image:functional-001800 | | | | | |
| | testdata\build | | | | | |
| image | functional-001800 image ls | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| image | functional-001800 image ls | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | --format json | | | | | |
| image | functional-001800 image ls | functional-001800 | minikube8\jenkins | v1.28.0 | 23 Jan 23 03:35 GMT | 23 Jan 23 03:35 GMT |
| | --format table | | | | | |
|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/23 03:34:18
Running on machine: minikube8
Binary: Built with gc go1.19.5 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
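As a reading aid for the entries below (just a decode of the stated format): in a line such as "I0123 03:34:18.705906 4196 out.go:296] ...", I is the severity (I/W/E/F for Info/Warning/Error/Fatal), 0123 is the month and day, 03:34:18.705906 the wall-clock time, 4196 the thread id, and out.go:296 the source file and line that emitted the message.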
I0123 03:34:18.705906 4196 out.go:296] Setting OutFile to fd 800 ...
I0123 03:34:18.771102 4196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0123 03:34:18.771172 4196 out.go:309] Setting ErrFile to fd 780...
I0123 03:34:18.771172 4196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0123 03:34:18.789084 4196 out.go:303] Setting JSON to false
I0123 03:34:18.791434 4196 start.go:125] hostinfo: {"hostname":"minikube8","uptime":7601,"bootTime":1674437257,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W0123 03:34:18.791434 4196 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0123 03:34:18.798075 4196 out.go:177] * [functional-001800] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
I0123 03:34:18.801899 4196 notify.go:220] Checking for updates...
I0123 03:34:18.803840 4196 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I0123 03:34:18.806367 4196 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0123 03:34:18.808670 4196 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I0123 03:34:18.810801 4196 out.go:177] - MINIKUBE_LOCATION=master
I0123 03:34:18.813207 4196 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0123 03:34:18.816100 4196 config.go:180] Loaded profile config "functional-001800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0123 03:34:18.817008 4196 driver.go:365] Setting default libvirt URI to qemu:///system
I0123 03:34:19.137520 4196 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0123 03:34:19.152801 4196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0123 03:34:19.761275 4196 info.go:266] docker info: {ID:PGGX:PUTV:UBOY:U7ZV:6J57:ER3U:ZRMQ:KNRO:ZPOS:BWR3:LBL6:WG2H Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:56 SystemTime:2023-01-23 03:34:19.3204126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0123 03:34:19.775249 4196 out.go:177] * Using the docker driver based on existing profile
I0123 03:34:19.777280 4196 start.go:296] selected driver: docker
I0123 03:34:19.777280 4196 start.go:840] validating driver "docker" against &{Name:functional-001800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1673540226-15630@sha256:03c9592728381094cbd0ff9603f75ae6b485dd7a390c3e35f02ae5ec10f2f3ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-001800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0123 03:34:19.777280 4196 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0123 03:34:19.795262 4196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0123 03:34:20.430293 4196 info.go:266] docker info: {ID:PGGX:PUTV:UBOY:U7ZV:6J57:ER3U:ZRMQ:KNRO:ZPOS:BWR3:LBL6:WG2H Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:56 SystemTime:2023-01-23 03:34:19.964868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0123 03:34:20.482778 4196 cni.go:84] Creating CNI manager for ""
I0123 03:34:20.482778 4196 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0123 03:34:20.482778 4196 start_flags.go:319] config:
{Name:functional-001800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1673540226-15630@sha256:03c9592728381094cbd0ff9603f75ae6b485dd7a390c3e35f02ae5ec10f2f3ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-001800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0123 03:34:20.487804 4196 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Mon 2023-01-23 03:28:26 UTC, end at Mon 2023-01-23 04:07:51 UTC. --
Jan 23 03:31:23 functional-001800 systemd[1]: Started Docker Application Container Engine.
Jan 23 03:31:23 functional-001800 dockerd[9877]: time="2023-01-23T03:31:23.677988200Z" level=info msg="API listen on [::]:2376"
Jan 23 03:31:23 functional-001800 dockerd[9877]: time="2023-01-23T03:31:23.694708000Z" level=info msg="API listen on /var/run/docker.sock"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.638392400Z" level=info msg="ignoring event" container=e609645ecc1b8bd814e17f07fb6ccc0d01da9a1876558305ed1728f78cbbc7a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.638461300Z" level=info msg="ignoring event" container=ba3ba6b2a2d042a1a7e1057960e4670bc8bcc1d8376565aa53a9eba4298143fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.638502300Z" level=info msg="ignoring event" container=4ed3598092ac05d5e4431db1f467283d197d7b77e4848a862db0749ad358f4fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.740155800Z" level=info msg="ignoring event" container=a1aaa862435950093f3f85c572494540f519f8873b61d993d23b2d444aa5a9e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.740251100Z" level=info msg="ignoring event" container=02b80bfa98fe15d2b678cf67740ef6c3247a978333cb80b8f3495035d62c9d34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.741962500Z" level=info msg="ignoring event" container=88f523493d5e8471c2d55fb100b7ba701ac8a741e025d038df08a03fb02ee2d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.743062700Z" level=info msg="ignoring event" container=a7ce620fa880aab1814723b23d74984b8d9aa62cffc0d388e306ae8eff6edd7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.839141700Z" level=info msg="ignoring event" container=1cdc7fca1f1c0b4c2793840eab440456d199cb5897f1ee85242412ec1a23c887 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.839194800Z" level=info msg="ignoring event" container=2a50f77e26ac169a43ddef9928b373815f046f6572e785f1b4057aa3e51f9fe3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:38 functional-001800 dockerd[9877]: time="2023-01-23T03:31:38.941115200Z" level=info msg="ignoring event" container=4486f1435426c46fbc073ba405861f95ff9d6408907190589268cf47360aa80f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:39 functional-001800 dockerd[9877]: time="2023-01-23T03:31:39.039768300Z" level=info msg="ignoring event" container=abf65b1ac221a22d9ec099a87a46a1c13cf96ecc63a828fe8853aedc9e044784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:41 functional-001800 dockerd[9877]: time="2023-01-23T03:31:41.114278000Z" level=info msg="ignoring event" container=fa771b75a7f50dcea00d44f30b718a9673d2e615149109bb6f83428d9e4fc828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:43 functional-001800 dockerd[9877]: time="2023-01-23T03:31:43.568791000Z" level=info msg="ignoring event" container=c1bc7ef23b8e8562df7d85a7e2a7cfb4eab2c8df172db460a40bebabc5c08633 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:50 functional-001800 dockerd[9877]: time="2023-01-23T03:31:50.252971300Z" level=info msg="ignoring event" container=34fa05276e2407377ed190b6ab404d7e6acfdd298677a9187a4b441c449c9f6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:52 functional-001800 dockerd[9877]: time="2023-01-23T03:31:52.645740300Z" level=info msg="ignoring event" container=108b5c35effec580630841c0359526ad5edcfb5ac3f01473a81c3ffddfdd24d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:56 functional-001800 dockerd[9877]: time="2023-01-23T03:31:56.556641700Z" level=info msg="ignoring event" container=8d5a460019e6ba67d9abdb3ef2374f7ccd693cc8bbfa3e6be11e7fb6b0584883 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:31:56 functional-001800 dockerd[9877]: time="2023-01-23T03:31:56.676781900Z" level=info msg="ignoring event" container=fd6b6170dad337eb7fd3666358a9e590717fa27213e11fb33587868b3b487243 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:32:05 functional-001800 dockerd[9877]: time="2023-01-23T03:32:05.703635600Z" level=info msg="ignoring event" container=348cecbcd567510e7b8cba5b3a59fc45a751f7af42982ed33b35c421f09534e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:35:31 functional-001800 dockerd[9877]: time="2023-01-23T03:35:31.131215800Z" level=info msg="ignoring event" container=279dd7860d217b739d7d1f5b421224375533ba50fc12f922289fc6482affacf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:35:31 functional-001800 dockerd[9877]: time="2023-01-23T03:35:31.475529900Z" level=info msg="ignoring event" container=ba7975d22409bfe222e341af81f6813f8a31df31cb3aa408b11338da2e984152 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:35:37 functional-001800 dockerd[9877]: time="2023-01-23T03:35:37.114070500Z" level=info msg="ignoring event" container=e52a9bc0de02a44be7b08a8725e132ce0c921583e99be16c911919b510a710c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 23 03:35:37 functional-001800 dockerd[9877]: time="2023-01-23T03:35:37.678921100Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
791ac281bd689 nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e 32 minutes ago Running myfrontend 0 810f37efeefd0
c19130cdfd32d 82e4c8a736a4f 32 minutes ago Running echoserver 0 8d09546f52537
d6616490a1700 nginx@sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 33 minutes ago Running nginx 0 bcbcd15a82bbc
754eb3a9408a1 mysql@sha256:f04fc2e2f01e65d6e2828b4cce2c4761d9258aee71d989e273b2ae309f44a945 33 minutes ago Running mysql 0 835a8c38c1840
6505715f77343 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 34 minutes ago Running echoserver 0 af96957022ddf
d7b1caa957dae 5185b96f0becf 35 minutes ago Running coredns 3 eaf4393b85013
971cd088f6f59 beaaf00edd38a 35 minutes ago Running kube-proxy 3 518bec2d17207
0977221906c6d 6e38f40d628db 35 minutes ago Running storage-provisioner 4 b953f2cb22aca
35aeb1a148021 6039992312758 35 minutes ago Running kube-controller-manager 4 ad70c76f77386
c07f93f4c9aac 0346dbd74bcb9 35 minutes ago Running kube-apiserver 2 e3abf7e2a840b
108b5c35effec 0346dbd74bcb9 36 minutes ago Exited kube-apiserver 1 e3abf7e2a840b
58a0a5f462cf1 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 4 9572a35fc3aea
348cecbcd5675 6039992312758 36 minutes ago Exited kube-controller-manager 3 ad70c76f77386
6f74b3ccb9a9c a8a176a5d5d69 36 minutes ago Running etcd 3 a308703310e31
c1bc7ef23b8e8 5185b96f0becf 36 minutes ago Exited coredns 2 a1aaa86243595
2a50f77e26ac1 beaaf00edd38a 36 minutes ago Exited kube-proxy 2 88f523493d5e8
abf65b1ac221a a8a176a5d5d69 36 minutes ago Exited etcd 2 a7ce620fa880a
1cdc7fca1f1c0 6d23ec0e8b87e 36 minutes ago Exited kube-scheduler 3 e609645ecc1b8
80150afc5b0b7 6e38f40d628db 37 minutes ago Exited storage-provisioner 3 758442a04690f
*
* ==> coredns [c1bc7ef23b8e] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:55842 - 56634 "HINFO IN 720660099831472606.3717681673270017393. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.0318741s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
*
* ==> coredns [d7b1caa957da] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:36864 - 22631 "HINFO IN 7213689854591112414.543287787351214437. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.0473972s
*
* ==> describe nodes <==
* Name: functional-001800
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-001800
kubernetes.io/os=linux
minikube.k8s.io/commit=46bccc7defca8fce9c90f760cdf14026855d957a
minikube.k8s.io/name=functional-001800
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_23T03_29_10_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Jan 2023 03:29:05 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-001800
AcquireTime: <unset>
RenewTime: Mon, 23 Jan 2023 04:07:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Jan 2023 04:06:29 +0000 Mon, 23 Jan 2023 03:29:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Jan 2023 04:06:29 +0000 Mon, 23 Jan 2023 03:29:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Jan 2023 04:06:29 +0000 Mon, 23 Jan 2023 03:29:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Jan 2023 04:06:29 +0000 Mon, 23 Jan 2023 03:29:22 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-001800
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: ef1a7ee09e1444ae8e9e6266232336e1
System UUID: ef1a7ee09e1444ae8e9e6266232336e1
Boot ID: 4751ebde-5db3-4348-b639-ea7d280af3e6
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-b97dm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default hello-node-connect-6458c8fb6f-pvcnj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
default mysql-596b7fcdbf-bppt4 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
kube-system coredns-565d847f94-mf4xz 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-001800 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-001800 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35m
kube-system kube-controller-manager-functional-001800 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-6l4jd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-001800 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38m kube-proxy
Normal Starting 35m kube-proxy
Normal Starting 37m kube-proxy
Normal Starting 39m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 39m (x4 over 39m) kubelet Node functional-001800 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 39m (x4 over 39m) kubelet Node functional-001800 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 39m (x3 over 39m) kubelet Node functional-001800 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 39m kubelet Updated Node Allocatable limit across pods
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-001800 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-001800 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-001800 status is now: NodeHasSufficientPID
Normal NodeNotReady 38m kubelet Node functional-001800 status is now: NodeNotReady
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 38m node-controller Node functional-001800 event: Registered Node functional-001800 in Controller
Normal NodeReady 38m kubelet Node functional-001800 status is now: NodeReady
Normal RegisteredNode 37m node-controller Node functional-001800 event: Registered Node functional-001800 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-001800 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-001800 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-001800 status is now: NodeHasSufficientPID
Normal RegisteredNode 35m node-controller Node functional-001800 event: Registered Node functional-001800 in Controller
*
* ==> dmesg <==
* [Jan23 03:42] WSL2: Performing memory compaction.
[Jan23 03:43] WSL2: Performing memory compaction.
[Jan23 03:44] WSL2: Performing memory compaction.
[Jan23 03:45] WSL2: Performing memory compaction.
[Jan23 03:46] WSL2: Performing memory compaction.
[Jan23 03:47] WSL2: Performing memory compaction.
[Jan23 03:48] WSL2: Performing memory compaction.
[Jan23 03:49] WSL2: Performing memory compaction.
[Jan23 03:50] WSL2: Performing memory compaction.
[Jan23 03:51] WSL2: Performing memory compaction.
[Jan23 03:52] WSL2: Performing memory compaction.
[Jan23 03:53] WSL2: Performing memory compaction.
[Jan23 03:54] WSL2: Performing memory compaction.
[Jan23 03:55] WSL2: Performing memory compaction.
[Jan23 03:56] WSL2: Performing memory compaction.
[Jan23 03:57] WSL2: Performing memory compaction.
[Jan23 03:58] WSL2: Performing memory compaction.
[Jan23 03:59] WSL2: Performing memory compaction.
[Jan23 04:00] WSL2: Performing memory compaction.
[Jan23 04:01] WSL2: Performing memory compaction.
[Jan23 04:03] WSL2: Performing memory compaction.
[Jan23 04:04] WSL2: Performing memory compaction.
[Jan23 04:05] WSL2: Performing memory compaction.
[Jan23 04:06] WSL2: Performing memory compaction.
[Jan23 04:07] WSL2: Performing memory compaction.
*
* ==> etcd [6f74b3ccb9a9] <==
* {"level":"warn","ts":"2023-01-23T03:35:22.659Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"705.6422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13529"}
{"level":"info","ts":"2023-01-23T03:35:22.659Z","caller":"traceutil/trace.go:171","msg":"trace[694962609] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:910; }","duration":"705.7145ms","start":"2023-01-23T03:35:21.954Z","end":"2023-01-23T03:35:22.659Z","steps":["trace[694962609] 'agreement among raft nodes before linearized reading' (duration: 705.2985ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-23T03:35:22.659Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-23T03:35:21.954Z","time spent":"705.8229ms","remote":"127.0.0.1:53480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13553,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2023-01-23T03:35:22.662Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.1206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-01-23T03:35:22.663Z","caller":"traceutil/trace.go:171","msg":"trace[1993913597] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:911; }","duration":"274.3374ms","start":"2023-01-23T03:35:22.388Z","end":"2023-01-23T03:35:22.663Z","steps":["trace[1993913597] 'agreement among raft nodes before linearized reading' (duration: 274.0873ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-23T03:35:22.663Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.4255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-01-23T03:35:22.663Z","caller":"traceutil/trace.go:171","msg":"trace[172688700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:911; }","duration":"101.8506ms","start":"2023-01-23T03:35:22.561Z","end":"2023-01-23T03:35:22.663Z","steps":["trace[172688700] 'agreement among raft nodes before linearized reading' (duration: 101.3961ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-23T03:35:40.251Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.8503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13630"}
{"level":"info","ts":"2023-01-23T03:35:40.337Z","caller":"traceutil/trace.go:171","msg":"trace[489483934] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:940; }","duration":"195.5593ms","start":"2023-01-23T03:35:40.141Z","end":"2023-01-23T03:35:40.337Z","steps":["trace[489483934] 'agreement among raft nodes before linearized reading' (duration: 96.2148ms)"],"step_count":1}
{"level":"info","ts":"2023-01-23T03:42:13.751Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1006}
{"level":"info","ts":"2023-01-23T03:42:13.789Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1006,"took":"38.2962ms"}
{"level":"info","ts":"2023-01-23T03:47:13.767Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1217}
{"level":"info","ts":"2023-01-23T03:47:13.768Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1217,"took":"533.9µs"}
{"level":"warn","ts":"2023-01-23T03:49:18.046Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.8641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-public\" ","response":"range_response_count:1 size:351"}
{"level":"info","ts":"2023-01-23T03:49:18.047Z","caller":"traceutil/trace.go:171","msg":"trace[14778373] range","detail":"{range_begin:/registry/namespaces/kube-public; range_end:; response_count:1; response_revision:1513; }","duration":"101.2929ms","start":"2023-01-23T03:49:17.945Z","end":"2023-01-23T03:49:18.047Z","steps":["trace[14778373] 'range keys from in-memory index tree' (duration: 100.6431ms)"],"step_count":1}
{"level":"info","ts":"2023-01-23T03:52:13.779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1427}
{"level":"info","ts":"2023-01-23T03:52:13.780Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1427,"took":"543.1µs"}
{"level":"info","ts":"2023-01-23T03:57:13.801Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1638}
{"level":"info","ts":"2023-01-23T03:57:13.802Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1638,"took":"643.1µs"}
{"level":"info","ts":"2023-01-23T04:02:13.817Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1848}
{"level":"info","ts":"2023-01-23T04:02:13.818Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1848,"took":"598.7µs"}
{"level":"warn","ts":"2023-01-23T04:05:37.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.6983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2023-01-23T04:05:37.364Z","caller":"traceutil/trace.go:171","msg":"trace[1473226278] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:2199; }","duration":"115.9789ms","start":"2023-01-23T04:05:37.248Z","end":"2023-01-23T04:05:37.364Z","steps":["trace[1473226278] 'agreement among raft nodes before linearized reading' (duration: 99.7624ms)"],"step_count":1}
{"level":"info","ts":"2023-01-23T04:07:13.831Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2058}
{"level":"info","ts":"2023-01-23T04:07:13.832Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2058,"took":"638.8µs"}
*
* ==> etcd [abf65b1ac221] <==
* {"level":"info","ts":"2023-01-23T03:31:28.557Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-23T03:31:28.557Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-23T03:31:28.557Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2023-01-23T03:31:29.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2023-01-23T03:31:29.854Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-001800 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-23T03:31:29.854Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-23T03:31:29.854Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-23T03:31:29.854Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-23T03:31:29.856Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-23T03:31:29.858Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-23T03:31:29.859Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-01-23T03:31:38.435Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-01-23T03:31:38.436Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-001800","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2023/01/23 03:31:38 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2023/01/23 03:31:38 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2023-01-23T03:31:38.535Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2023-01-23T03:31:38.643Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-23T03:31:38.644Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-23T03:31:38.644Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-001800","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 04:07:52 up 57 min, 0 users, load average: 0.77, 0.69, 0.68
Linux functional-001800 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [108b5c35effe] <==
* I0123 03:31:52.537941 1 server.go:563] external host was not specified, using 192.168.49.2
I0123 03:31:52.539517 1 server.go:161] Version: v1.25.3
I0123 03:31:52.539652 1 server.go:163] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E0123 03:31:52.540134 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
*
* ==> kube-apiserver [c07f93f4c9aa] <==
* I0123 03:32:55.158599 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.99.201.92]
I0123 03:33:05.037015 1 trace.go:205] Trace[622873674]: "List(recursive=true) etcd3" audit-id:950ca6d2-d83f-48d9-91a0-190872f89801,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (23-Jan-2023 03:33:04.278) (total time: 758ms):
Trace[622873674]: [758.0143ms] [758.0143ms] END
I0123 03:33:05.041201 1 trace.go:205] Trace[1639692585]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:950ca6d2-d83f-48d9-91a0-190872f89801,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (23-Jan-2023 03:33:04.278) (total time: 761ms):
Trace[1639692585]: ---"Listing from storage done" 758ms (03:33:05.037)
Trace[1639692585]: [761.6167ms] [761.6167ms] END
I0123 03:33:14.548794 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.96.220.87]
I0123 03:34:10.252049 1 trace.go:205] Trace[1834632288]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints (23-Jan-2023 03:34:08.944) (total time: 1307ms):
Trace[1834632288]: ---"Txn call finished" err:<nil> 1300ms (03:34:10.251)
Trace[1834632288]: [1.3075852s] [1.3075852s] END
I0123 03:34:10.252441 1 trace.go:205] Trace[1009440327]: "List(recursive=true) etcd3" audit-id:5e2096b0-12a1-4e29-b26f-9cd989f4c227,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (23-Jan-2023 03:34:08.950) (total time: 1301ms):
Trace[1009440327]: [1.3015337s] [1.3015337s] END
I0123 03:34:10.252471 1 trace.go:205] Trace[1179048111]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:bdc560a5-4d11-441d-916c-b210af5cf810,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (23-Jan-2023 03:34:09.443) (total time: 808ms):
Trace[1179048111]: ---"About to write a response" 808ms (03:34:10.252)
Trace[1179048111]: [808.8396ms] [808.8396ms] END
I0123 03:34:10.253548 1 trace.go:205] Trace[491454145]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:5e2096b0-12a1-4e29-b26f-9cd989f4c227,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (23-Jan-2023 03:34:08.950) (total time: 1302ms):
Trace[491454145]: ---"Listing from storage done" 1301ms (03:34:10.252)
Trace[491454145]: [1.3026728s] [1.3026728s] END
I0123 03:34:24.948018 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.106.229.147]
I0123 03:35:07.937032 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.100.202.198]
I0123 03:35:22.661703 1 trace.go:205] Trace[1585375679]: "List(recursive=true) etcd3" audit-id:d9496d9c-6082-4781-aa95-83de0697e47d,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (23-Jan-2023 03:35:21.952) (total time: 708ms):
Trace[1585375679]: [708.9962ms] [708.9962ms] END
I0123 03:35:22.662630 1 trace.go:205] Trace[1934079069]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:d9496d9c-6082-4781-aa95-83de0697e47d,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (23-Jan-2023 03:35:21.952) (total time: 709ms):
Trace[1934079069]: ---"Listing from storage done" 709ms (03:35:22.661)
Trace[1934079069]: [709.9599ms] [709.9599ms] END
*
* ==> kube-controller-manager [348cecbcd567] <==
* I0123 03:31:50.550408 1 serving.go:348] Generated self-signed cert in-memory
I0123 03:31:52.888930 1 controllermanager.go:178] Version: v1.25.3
I0123 03:31:52.889084 1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0123 03:31:52.891029 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0123 03:31:52.891130 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0123 03:31:52.891056 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0123 03:31:52.891096 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
F0123 03:32:05.638861 1 controllermanager.go:221] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-controller-manager [35aeb1a14802] <==
* I0123 03:32:41.354638 1 shared_informer.go:262] Caches are synced for GC
I0123 03:32:41.435992 1 shared_informer.go:262] Caches are synced for node
I0123 03:32:41.436155 1 range_allocator.go:166] Starting range CIDR allocator
I0123 03:32:41.436170 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0123 03:32:41.436190 1 shared_informer.go:262] Caches are synced for cidrallocator
I0123 03:32:41.436336 1 shared_informer.go:262] Caches are synced for daemon sets
I0123 03:32:41.436525 1 shared_informer.go:262] Caches are synced for TTL
I0123 03:32:41.437035 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0123 03:32:41.436528 1 shared_informer.go:262] Caches are synced for resource quota
I0123 03:32:41.439944 1 shared_informer.go:262] Caches are synced for attach detach
I0123 03:32:41.536185 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0123 03:32:41.536125 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0123 03:32:41.536330 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0123 03:32:41.536579 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0123 03:32:41.536706 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0123 03:32:41.849821 1 shared_informer.go:262] Caches are synced for garbage collector
I0123 03:32:41.849936 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0123 03:32:41.849978 1 shared_informer.go:262] Caches are synced for garbage collector
I0123 03:32:54.547279 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I0123 03:32:54.641417 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-b97dm"
I0123 03:33:14.838730 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I0123 03:33:14.944692 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-bppt4"
I0123 03:34:56.256508 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0123 03:35:07.591971 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I0123 03:35:07.643082 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-pvcnj"
*
* ==> kube-proxy [2a50f77e26ac] <==
* E0123 03:31:29.741819 1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0123 03:31:29.745604 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0123 03:31:29.750308 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0123 03:31:29.754671 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0123 03:31:29.759454 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0123 03:31:29.835918 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0123 03:31:29.840121 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-001800": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:30.890948 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-001800": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-proxy [971cd088f6f5] <==
* I0123 03:32:34.872118 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0123 03:32:34.935990 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0123 03:32:34.939040 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0123 03:32:34.941658 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0123 03:32:34.944149 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0123 03:32:34.960527 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0123 03:32:34.960894 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0123 03:32:34.961215 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0123 03:32:35.074682 1 server_others.go:206] "Using iptables Proxier"
I0123 03:32:35.074858 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0123 03:32:35.074875 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0123 03:32:35.074896 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0123 03:32:35.074918 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0123 03:32:35.075362 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0123 03:32:35.076173 1 server.go:661] "Version info" version="v1.25.3"
I0123 03:32:35.076335 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0123 03:32:35.077142 1 config.go:317] "Starting service config controller"
I0123 03:32:35.077252 1 shared_informer.go:255] Waiting for caches to sync for service config
I0123 03:32:35.077299 1 config.go:444] "Starting node config controller"
I0123 03:32:35.077312 1 shared_informer.go:255] Waiting for caches to sync for node config
I0123 03:32:35.077662 1 config.go:226] "Starting endpoint slice config controller"
I0123 03:32:35.077761 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0123 03:32:35.177580 1 shared_informer.go:262] Caches are synced for node config
I0123 03:32:35.177743 1 shared_informer.go:262] Caches are synced for service config
I0123 03:32:35.178980 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [1cdc7fca1f1c] <==
* W0123 03:31:31.955946 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.955994 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:31.956012 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.956088 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:31.956106 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.956139 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:31.956265 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.956330 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:31.956339 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.956392 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:31.956484 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:31.956535 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:32.035136 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:32.035295 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:32.035211 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:32.035353 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:32.035424 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:32.035505 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:31:32.035550 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:31:32.035727 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
I0123 03:31:38.536698 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0123 03:31:38.536722 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0123 03:31:38.536737 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0123 03:31:38.536779 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0123 03:31:38.537218 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [58a0a5f462cf] <==
* E0123 03:32:06.998093 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:09.706525 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:32:09.706674 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:10.362550 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:32:10.362691 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:11.673599 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:32:11.673789 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:12.503129 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:32:12.503276 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:12.652559 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
E0123 03:32:12.652704 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=560": dial tcp 192.168.49.2:8441: connect: connection refused
W0123 03:32:17.338042 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0123 03:32:17.338103 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0123 03:32:17.338283 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0123 03:32:17.338326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0123 03:32:17.338818 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0123 03:32:17.338848 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0123 03:32:17.345419 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0123 03:32:17.345544 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0123 03:32:17.354036 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0123 03:32:17.354057 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0123 03:32:17.338848 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0123 03:32:17.436771 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0123 03:32:17.436894 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0123 03:32:17.436963 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
*
* ==> kubelet <==
* -- Logs begin at Mon 2023-01-23 03:28:26 UTC, end at Mon 2023-01-23 04:07:53 UTC. --
Jan 23 03:34:58 functional-001800 kubelet[12183]: I0123 03:34:58.195818 12183 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ba7975d22409bfe222e341af81f6813f8a31df31cb3aa408b11338da2e984152"
Jan 23 03:35:07 functional-001800 kubelet[12183]: I0123 03:35:07.660417 12183 topology_manager.go:205] "Topology Admit Handler"
Jan 23 03:35:07 functional-001800 kubelet[12183]: I0123 03:35:07.839778 12183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5b6m\" (UniqueName: \"kubernetes.io/projected/396fa06e-cfb1-43d8-bb83-d2205f80086d-kube-api-access-w5b6m\") pod \"hello-node-connect-6458c8fb6f-pvcnj\" (UID: \"396fa06e-cfb1-43d8-bb83-d2205f80086d\") " pod="default/hello-node-connect-6458c8fb6f-pvcnj"
Jan 23 03:35:09 functional-001800 kubelet[12183]: I0123 03:35:09.674841 12183 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8d09546f52537b4995a960bb2a3351a8f3a2497f7b3d1819efd724b7cddae401"
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.666919 12183 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b045f91-7b24-4a3b-9f0b-8a02388983f0-pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410" (OuterVolumeSpecName: "mypd") pod "5b045f91-7b24-4a3b-9f0b-8a02388983f0" (UID: "5b045f91-7b24-4a3b-9f0b-8a02388983f0"). InnerVolumeSpecName "pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.669765 12183 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/5b045f91-7b24-4a3b-9f0b-8a02388983f0-pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410\") pod \"5b045f91-7b24-4a3b-9f0b-8a02388983f0\" (UID: \"5b045f91-7b24-4a3b-9f0b-8a02388983f0\") "
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.669830 12183 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfftb\" (UniqueName: \"kubernetes.io/projected/5b045f91-7b24-4a3b-9f0b-8a02388983f0-kube-api-access-dfftb\") pod \"5b045f91-7b24-4a3b-9f0b-8a02388983f0\" (UID: \"5b045f91-7b24-4a3b-9f0b-8a02388983f0\") "
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.673576 12183 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b045f91-7b24-4a3b-9f0b-8a02388983f0-kube-api-access-dfftb" (OuterVolumeSpecName: "kube-api-access-dfftb") pod "5b045f91-7b24-4a3b-9f0b-8a02388983f0" (UID: "5b045f91-7b24-4a3b-9f0b-8a02388983f0"). InnerVolumeSpecName "kube-api-access-dfftb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.770645 12183 reconciler.go:399] "Volume detached for volume \"pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410\" (UniqueName: \"kubernetes.io/host-path/5b045f91-7b24-4a3b-9f0b-8a02388983f0-pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410\") on node \"functional-001800\" DevicePath \"\""
Jan 23 03:35:31 functional-001800 kubelet[12183]: I0123 03:35:31.770774 12183 reconciler.go:399] "Volume detached for volume \"kube-api-access-dfftb\" (UniqueName: \"kubernetes.io/projected/5b045f91-7b24-4a3b-9f0b-8a02388983f0-kube-api-access-dfftb\") on node \"functional-001800\" DevicePath \"\""
Jan 23 03:35:32 functional-001800 kubelet[12183]: I0123 03:35:32.539150 12183 scope.go:115] "RemoveContainer" containerID="279dd7860d217b739d7d1f5b421224375533ba50fc12f922289fc6482affacf8"
Jan 23 03:35:33 functional-001800 kubelet[12183]: I0123 03:35:33.053614 12183 topology_manager.go:205] "Topology Admit Handler"
Jan 23 03:35:33 functional-001800 kubelet[12183]: E0123 03:35:33.054036 12183 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="5b045f91-7b24-4a3b-9f0b-8a02388983f0" containerName="myfrontend"
Jan 23 03:35:33 functional-001800 kubelet[12183]: I0123 03:35:33.054251 12183 memory_manager.go:345] "RemoveStaleState removing state" podUID="5b045f91-7b24-4a3b-9f0b-8a02388983f0" containerName="myfrontend"
Jan 23 03:35:33 functional-001800 kubelet[12183]: I0123 03:35:33.243428 12183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410\" (UniqueName: \"kubernetes.io/host-path/30c90978-0323-4713-9ee4-4c7266968cf5-pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410\") pod \"sp-pod\" (UID: \"30c90978-0323-4713-9ee4-4c7266968cf5\") " pod="default/sp-pod"
Jan 23 03:35:33 functional-001800 kubelet[12183]: I0123 03:35:33.243659 12183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkhbt\" (UniqueName: \"kubernetes.io/projected/30c90978-0323-4713-9ee4-4c7266968cf5-kube-api-access-gkhbt\") pod \"sp-pod\" (UID: \"30c90978-0323-4713-9ee4-4c7266968cf5\") " pod="default/sp-pod"
Jan 23 03:35:33 functional-001800 kubelet[12183]: I0123 03:35:33.569561 12183 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="810f37efeefd0a55e42b7ce2a5e7158e875905183069ab9361f301a4a2ae2dcf"
Jan 23 03:35:34 functional-001800 kubelet[12183]: I0123 03:35:34.478973 12183 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5b045f91-7b24-4a3b-9f0b-8a02388983f0 path="/var/lib/kubelet/pods/5b045f91-7b24-4a3b-9f0b-8a02388983f0/volumes"
Jan 23 03:36:46 functional-001800 kubelet[12183]: W0123 03:36:46.750609 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 03:41:46 functional-001800 kubelet[12183]: W0123 03:41:46.757035 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 03:46:46 functional-001800 kubelet[12183]: W0123 03:46:46.756309 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 03:51:46 functional-001800 kubelet[12183]: W0123 03:51:46.754564 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 03:56:46 functional-001800 kubelet[12183]: W0123 03:56:46.757520 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 04:01:46 functional-001800 kubelet[12183]: W0123 04:01:46.757109 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 23 04:06:46 functional-001800 kubelet[12183]: W0123 04:06:46.759596 12183 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [0977221906c6] <==
* I0123 03:32:34.051475 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0123 03:32:34.070012 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0123 03:32:34.070155 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0123 03:32:51.494072 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0123 03:32:51.494420 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6a06792-96c4-4156-b40f-1c0bc270813a", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-001800_127801ce-9e0f-4dee-b49c-111605bdd35c became leader
I0123 03:32:51.494554 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-001800_127801ce-9e0f-4dee-b49c-111605bdd35c!
I0123 03:32:51.595149 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-001800_127801ce-9e0f-4dee-b49c-111605bdd35c!
I0123 03:34:56.256195 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0123 03:34:56.256508 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 94cec73e-48b8-4b8a-a66e-4a76dd5d031f 385 0 2023-01-23 03:29:28 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-01-23 03:29:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410 &PersistentVolumeClaim{ObjectMeta:{myclaim default 9702d086-36d2-47b5-b8e8-d60dc4ad4410 853 0 2023-01-23 03:34:56 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2023-01-23 03:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2023-01-23 03:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0123 03:34:56.257300 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9702d086-36d2-47b5-b8e8-d60dc4ad4410", APIVersion:"v1", ResourceVersion:"853", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0123 03:34:56.257380 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410" provisioned
I0123 03:34:56.257632 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0123 03:34:56.257646 1 volume_store.go:212] Trying to save persistentvolume "pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410"
I0123 03:34:56.273202 1 volume_store.go:219] persistentvolume "pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410" saved
I0123 03:34:56.273354 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9702d086-36d2-47b5-b8e8-d60dc4ad4410", APIVersion:"v1", ResourceVersion:"853", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9702d086-36d2-47b5-b8e8-d60dc4ad4410
*
* ==> storage-provisioner [80150afc5b0b] <==
* I0123 03:30:36.346999 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0123 03:30:36.365223 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0123 03:30:36.365421 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0123 03:30:53.847710 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0123 03:30:53.848028 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6a06792-96c4-4156-b40f-1c0bc270813a", APIVersion:"v1", ResourceVersion:"552", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-001800_1dc2a4fe-e116-42f4-9784-5fccdf36e045 became leader
I0123 03:30:53.848139 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-001800_1dc2a4fe-e116-42f4-9784-5fccdf36e045!
I0123 03:30:53.949312 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-001800_1dc2a4fe-e116-42f4-9784-5fccdf36e045!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-001800 -n functional-001800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-001800 -n functional-001800: (1.5625846s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-001800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmd (2101.03s)