=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run: kubectl --context functional-102159 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run: kubectl --context functional-102159 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-m9lg9" [80b0312d-0143-4562-a5aa-6101d62dda34] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-m9lg9" [80b0312d-0143-4562-a5aa-6101d62dda34] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 28.0936468s
functional_test.go:1449: (dbg) Run: out/minikube-windows-amd64.exe -p functional-102159 service list
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 service list: (2.0218423s)
functional_test.go:1463: (dbg) Run: out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node: exit status 1 (35m26.1854961s)
-- stdout --
https://127.0.0.1:62653
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node" : exit status 1
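Editor's note: the two messages above explain the shape of this failure. With the Docker driver on Windows, "minikube service --url" keeps a tunnel process in the foreground (hence the "terminal needs to be open" warning), and the harness cannot deliver an interrupt on Windows, so the command is only reaped when the 35-minute test timeout kills it. A minimal sketch of one workaround, in Go like the test suite itself: scan the child's stdout for the URL, then Kill() the process instead of interrupting it. The serviceURL helper is hypothetical (not part of the test suite); the binary path and flags are taken from the failing invocation above, and a real test would keep the tunnel alive while probing the URL before killing it.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// serviceURL starts the tunnel command, captures the first URL it prints,
// and then kills the child outright, since an interrupt cannot be sent on
// Windows.
func serviceURL() (string, error) {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "functional-102159",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	defer func() {
		cmd.Process.Kill() // Kill, not Interrupt: interrupts are unsupported on Windows.
		cmd.Wait()
	}()
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "http") {
			return line, nil // e.g. https://127.0.0.1:62653, as printed above
		}
	}
	return "", fmt.Errorf("tunnel exited without printing a URL")
}

func main() {
	url, err := serviceURL()
	fmt.Println(url, err)
}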
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run: kubectl --context functional-102159 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name: hello-node-5fcdfb5cc4-m9lg9
Namespace: default
Priority: 0
Node: functional-102159/192.168.49.2
Start Time: Sat, 14 Jan 2023 10:26:02 +0000
Labels: app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID: docker://126fb125f52e4640dc9a13d87a2f5c93d62a67a35b62cd3539c0c64adafd6778
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 14 Jan 2023 10:26:24 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w4s7s (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-w4s7s:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-5fcdfb5cc4-m9lg9 to functional-102159
Normal Pulling 35m kubelet, functional-102159 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 35m kubelet, functional-102159 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 19.3431569s
Normal Created 35m kubelet, functional-102159 Created container echoserver
Normal Started 35m kubelet, functional-102159 Started container echoserver
Name: hello-node-connect-6458c8fb6f-5bzgt
Namespace: default
Priority: 0
Node: functional-102159/192.168.49.2
Start Time: Sat, 14 Jan 2023 10:26:42 +0000
Labels: app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID: docker://d5073e9565a3df99b65672acfc14d9c0f95dda871b5dcfd3cbcdd386982c4b3c
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 14 Jan 2023 10:26:44 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k9jn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-9k9jn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-6458c8fb6f-5bzgt to functional-102159
Normal Pulled 35m kubelet, functional-102159 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 35m kubelet, functional-102159 Created container echoserver
Normal Started 35m kubelet, functional-102159 Started container echoserver
functional_test.go:1412: (dbg) Run: kubectl --context functional-102159 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run: kubectl --context functional-102159 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.103.134.250
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32206/TCP
Endpoints: 172.17.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
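Editor's note: the Service itself looks healthy here. Endpoints (172.17.0.3:8080) matches the pod IP in the describe output above, so the in-cluster wiring is fine; the open question is whether the host-side tunnel from the earlier stdout (https://127.0.0.1:62653) was still listening when the command was killed. A liveness probe for that side is a plain TCP dial; a sketch, assuming the port number printed above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 62653 is the tunnel port from the -- stdout -- block earlier in this log.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:62653", 2*time.Second)
	if err != nil {
		fmt.Println("tunnel not listening:", err)
		return
	}
	conn.Close()
	fmt.Println("tunnel is accepting connections")
}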
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-102159
helpers_test.go:235: (dbg) docker inspect functional-102159:
-- stdout --
[
{
"Id": "0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d",
"Created": "2023-01-14T10:22:37.0109251Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 27855,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-01-14T10:22:37.953171Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
"ResolvConfPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/hostname",
"HostsPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/hosts",
"LogPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d-json.log",
"Name": "/functional-102159",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-102159:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-102159",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534-init/diff:/var/lib/docker/overlay2/0319fc5680615c1d80ed1d2dd4ec2e28e4900e96dc89cfb3186ce0daa2f0c798/diff:/var/lib/docker/overlay2/641c0be8e9dcba148f49ccaf3907690f03e68a0453054a018cc2d8d554ceb228/diff:/var/lib/docker/overlay2/f192bff258f753c4dd8a9584547f6a30e0c3ff7a8ee0308be3e0e487487f1745/diff:/var/lib/docker/overlay2/9263b14da091cc57f0e2b54392ed379e6a1aac266f9a63e30675e9c43e7588a0/diff:/var/lib/docker/overlay2/aecae5087861fb71f7cf3b407e0830d3a2f5427641436c01100110504ebcd43b/diff:/var/lib/docker/overlay2/1cceb96d81956493568b9f4730cafe1ed39e522a1f60ed52ed7d0e4eb8abce3a/diff:/var/lib/docker/overlay2/ec4719e9a881a9f037bdfbede7568a0aba53797f4d2ef6c8a02b498f4495698a/diff:/var/lib/docker/overlay2/f75fa843f23c738b5f0bd4ffbb5174ea490a1cd333873e37a6ed5ad14d38a9d1/diff:/var/lib/docker/overlay2/75422af761b92f586be003f28ac3a39f970908caa891178940eba46ec4806383/diff:/var/lib/docker/overlay2/316756
87e92fb1275a51786e88c59966ec6bc403cf768ab98561f8d3926ee5d8/diff:/var/lib/docker/overlay2/2932be832d371e1053d62c6d513f4d77c43d23463aaaf0d98aee67b3f7602540/diff:/var/lib/docker/overlay2/2c44356fce90aac744db8a0a4035502d16fea1da630ce147e088e157dceef923/diff:/var/lib/docker/overlay2/9b03cc19f9697ba0d664ef1eb34ddfc1e549e9031135b5b8b1dafa7454d399cc/diff:/var/lib/docker/overlay2/baab503d3a26d91d289e28c73212f5271a16469af630e973714bf9b7acc2a206/diff:/var/lib/docker/overlay2/d0e4408ba7d017cf5e4d2739628754664bdea24b7186b7efc267ec03a54d2283/diff:/var/lib/docker/overlay2/9accf9e797118b7b88e2270d1f022551d5e9bc77097e49e8c5acca75fafbe2a9/diff:/var/lib/docker/overlay2/e05ce2bcfac188595f4eb1980cf86c681398ab65fe6bc25d8129f27a6c87b8a9/diff:/var/lib/docker/overlay2/c052f07688bdd2e85ca704f876a24ebf259bf7fe7fc122473d68e5b5f4a37b52/diff:/var/lib/docker/overlay2/950ed07c617e47c75b85bb70ea3d5b83db751b0dd05915f89163962e0966ad88/diff:/var/lib/docker/overlay2/48c61d81c7eaa0e571038e496be8e54495cffce611a2c5591fa5c698eeaab5ec/diff:/var/lib/d
ocker/overlay2/85cb428a7ca5bc60e99f20c6e6851d70c9cea3c26ac86817831d4d8bd130bb13/diff:/var/lib/docker/overlay2/9ca4a5e53e6f5a7444634c91f67e314e4c31fbf79b3af20bae436b5cddddbf83/diff:/var/lib/docker/overlay2/f9c4b034315c85af252fa61e528fb4305c5a666a1251bd5e9c0e237a869b4abd/diff:/var/lib/docker/overlay2/c5e5c4c66df1a7ddb9c86cad1c9aa940caa82844ad0e35cb02276aa0ba6d0b7f/diff:/var/lib/docker/overlay2/aaf58f33ac931eb54be4cd1919570ca1af95733dcab4b9c2e10c41131e77db49/diff:/var/lib/docker/overlay2/a7b40564a575c87ea262f224ef13fc6481638ba1a63ed7240fb4bf8926a6ed85/diff:/var/lib/docker/overlay2/c4cb52a2953465db8efcff407454278e0296ab6baa055d94c7e883851c5cf217/diff:/var/lib/docker/overlay2/c81d13d116441cca9b5166ae9674e23797741062dadc263c582f2367b998983a/diff:/var/lib/docker/overlay2/4c5c0ba04aaacf397dbaee3fe647d47794f60c155c62c2ab195d5354b2205f48/diff:/var/lib/docker/overlay2/550368a0b173c5abbc3d55283b835ff0133b2b59a7c034331f8df09665618a27/diff:/var/lib/docker/overlay2/76b7a7ae6ccf1e1f3f7dc7eb334096c3b8033b536b6a9259fb5030a4440
04eba/diff:/var/lib/docker/overlay2/d2b1f30f2546d9b7f03bc4e219d94725d78d8c982c20f8f0bafdbfda5c1ae8bf/diff:/var/lib/docker/overlay2/85233a1e3baa32dd2921cf7bedeba0cf4b3da93a30b961e0087d1bdc1a4c1bc3/diff:/var/lib/docker/overlay2/cb8ef3b71a3380e31859eddfd48bb662ea2b36c8c7e7da11114980e0ba7da149/diff:/var/lib/docker/overlay2/b7f9ef634f5e76c7886184f9b63923a29d2e0b320850bfd88d27c3169035e9f9/diff:/var/lib/docker/overlay2/857989e997e2c25ce2f08f05b398edd7de66bb41d9ab58299a913342d0666fe1/diff:/var/lib/docker/overlay2/be6259f9536637946901e9ae95da97a3546ff13afb5e7c6c53122c510189a587/diff:/var/lib/docker/overlay2/9af98ef1175ee709936e3f747a75f96127fef4beb6de6ded048be349cd2beeae/diff:/var/lib/docker/overlay2/380334ebb180d7dbdf9f3daf712f6550a5275eba9cffdac50822a52a18ae9d20/diff:/var/lib/docker/overlay2/4781a5f04440c58de00439020d596e31349356aa3a7df83e3dccee9d11c40a37/diff:/var/lib/docker/overlay2/b50427ad8910da25774c4a723d44972dec4cb53ea795140cda716993f11f1fbe/diff:/var/lib/docker/overlay2/314c762565a5db119530283731d2d311f49334
fd16d0fe1ca399b41cae25e54e/diff:/var/lib/docker/overlay2/7d010f8f6973e80c495d5a95dc1bbe689146ea762229f6697047402ca4f12c0e/diff:/var/lib/docker/overlay2/b586bd70ddd07df9686091a4425ae3d15c8b5879df73e7751a868397558650b9/diff",
"MergedDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/merged",
"UpperDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/diff",
"WorkDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-102159",
"Source": "/var/lib/docker/volumes/functional-102159/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-102159",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-102159",
"name.minikube.sigs.k8s.io": "functional-102159",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7086e379e867baa9b85cd24d4250a4458f5554f3849396c509ff6ee157d727eb",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62389"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62390"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62391"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62392"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "62393"
}
]
},
"SandboxKey": "/var/run/docker/netns/7086e379e867",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-102159": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"0013e6d15176",
"functional-102159"
],
"NetworkID": "5dc832c33b0655d4aa36f8e4707672dba42fa1fc4e952757987578d3cf3b4030",
"EndpointID": "282a8a85b11c4742f8b3b26abfc1524b4bea446b0fd48533127346f538bc1544",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
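Editor's note: the inspect dump above also records the host side of every published port (NetworkSettings.Ports), which is how minikube reaches the apiserver's 8441/tcp via 127.0.0.1:62393. A sketch of pulling that mapping out programmatically, assuming the docker CLI is on PATH and the container name from this run; only the fields needed for the lookup are modeled:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspect models just enough of the "docker inspect" JSON shown above to
// read the published host ports; everything else is ignored by the decoder.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-102159").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no container in inspect output")
	}
	for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		// For this run the expected output is 127.0.0.1:62393 (see above).
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}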
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-102159 -n functional-102159
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-102159 -n functional-102159: (1.5368487s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-102159 logs -n 25
E0114 11:02:05.201448 9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 logs -n 25: (3.4435296s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /usr/share/ca-certificates/9968.pem | | | | | |
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /etc/ssl/certs/51391683.0 | | | | | |
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /etc/ssl/certs/99682.pem | | | | | |
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /usr/share/ca-certificates/99682.pem | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| image | functional-102159 image save | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | gcr.io/google-containers/addon-resizer:functional-102159 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| docker-env | functional-102159 docker-env | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| image | functional-102159 image rm | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | gcr.io/google-containers/addon-resizer:functional-102159 | | | | | |
| docker-env | functional-102159 docker-env | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| image | functional-102159 image load | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| ssh | functional-102159 ssh sudo cat | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | /etc/test/nested/copy/9968/hosts | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| image | functional-102159 image save --daemon | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | gcr.io/google-containers/addon-resizer:functional-102159 | | | | | |
| update-context | functional-102159 | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-102159 | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-102159 | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | --format table | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | --format short | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | --format yaml | | | | | |
| ssh | functional-102159 ssh pgrep | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | |
| | buildkitd | | | | | |
| image | functional-102159 image build -t | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | localhost/my-image:functional-102159 | | | | | |
| | testdata\build | | | | | |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| image | functional-102159 image ls | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
| | --format json | | | | | |
|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 10:26:13
Running on machine: minikube2
Binary: Built with gc go1.19.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 10:26:12.989189 6696 out.go:296] Setting OutFile to fd 1004 ...
I0114 10:26:13.071023 6696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:26:13.071023 6696 out.go:309] Setting ErrFile to fd 768...
I0114 10:26:13.071023 6696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:26:13.090024 6696 out.go:303] Setting JSON to false
I0114 10:26:13.093023 6696 start.go:125] hostinfo: {"hostname":"minikube2","uptime":3584,"bootTime":1673688389,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0114 10:26:13.094020 6696 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0114 10:26:13.098027 6696 out.go:177] * [functional-102159] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
I0114 10:26:13.102042 6696 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0114 10:26:13.102042 6696 notify.go:220] Checking for updates...
I0114 10:26:13.107020 6696 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0114 10:26:13.110036 6696 out.go:177] - MINIKUBE_LOCATION=15642
I0114 10:26:13.119017 6696 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 10:26:13.123023 6696 config.go:180] Loaded profile config "functional-102159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:26:13.124026 6696 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 10:26:13.452022 6696 docker.go:138] docker version: linux-20.10.21
I0114 10:26:13.461029 6696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 10:26:14.192106 6696 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2023-01-14 10:26:13.6571601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0114 10:26:14.206736 6696 out.go:177] * Using the docker driver based on existing profile
I0114 10:26:14.209791 6696 start.go:294] selected driver: docker
I0114 10:26:14.209791 6696 start.go:838] validating driver "docker" against &{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:26:14.210359 6696 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 10:26:14.233560 6696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 10:26:14.890863 6696 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2023-01-14 10:26:14.391654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0114 10:26:14.947799 6696 cni.go:95] Creating CNI manager for ""
I0114 10:26:14.947799 6696 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 10:26:14.947799 6696 start_flags.go:319] config:
{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:26:14.951494 6696 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Sat 2023-01-14 10:22:38 UTC, end at Sat 2023-01-14 11:02:04 UTC. --
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.188433700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 406ae7e1542dc573185d0fc18fa6d59d30fa990d6c19a92528a72978ec171c50 9e9cf1bf2b70299fb2844ae8d73c9df55985a00c6ffd8322cdf4e7f2201576c1], retrying...."
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.400923000Z" level=info msg="Removing stale sandbox 48fa11f5373d28abf056f3088c7d66693324d929b7e1ae155660fab913de7932 (86fb4aa5cb191be8f92154587714113827558328ec119e493813962887543447)"
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.408667300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 38b6769732b9eb35fe8627f4afe12eaf1585c21048207cda3f6313022b5a9dd8 436b52e1f8d1f5eb64cef9000b2bf3dba28b046bead179819a8c6d2e21da97df], retrying...."
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.501604500Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.610251500Z" level=info msg="Loading containers: done."
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.683396100Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.683557700Z" level=info msg="Daemon has completed initialization"
Jan 14 10:25:17 functional-102159 systemd[1]: Started Docker Application Container Engine.
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.735825000Z" level=info msg="API listen on [::]:2376"
Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.751020900Z" level=info msg="API listen on /var/run/docker.sock"
Jan 14 10:25:18 functional-102159 dockerd[8509]: time="2023-01-14T10:25:18.099900100Z" level=error msg="Failed to compute size of container rootfs cf7dfe43e73cf169850b52c4c6ea070bcfe118ee3cb98b7da10067e5186c3de0: mount does not exist"
Jan 14 10:25:18 functional-102159 dockerd[8509]: time="2023-01-14T10:25:18.207068700Z" level=error msg="Failed to compute size of container rootfs dea50957430ba664cf8891d7d9acec84ade810c476ea0072f1965b5abe699612: mount does not exist"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.309974100Z" level=info msg="ignoring event" container=4798303d77b584f6a204d194ddf0d4190b7761a8e123edc43076c7e65e2dffff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.398462800Z" level=info msg="ignoring event" container=50bfd183fcb7b51abb9f4e0678d33f4570f388e619be6cbe6e90a9ae17f4f8a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.398523500Z" level=info msg="ignoring event" container=0354baff29b5d47625c000016cb90034f6814d353df74a75798b139be72d5aa4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.505885000Z" level=info msg="ignoring event" container=efa78421167e425954ce6b9f859c0b64273c31e5a550fe71c7b01181795849f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.505965500Z" level=info msg="ignoring event" container=ef1da7f1f0c7d1a635bddd3a7f98811b5b47e0e0a9d797cf61c29382b88a0eb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.506002400Z" level=info msg="ignoring event" container=71a71b9bf4584e969bbdac5f2ba18bd91477d21ca8f5a6897a36b005ee9a1261 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.506023400Z" level=info msg="ignoring event" container=01aa7f5164cd28eeb5ff68f0de296fb00e2efaf8fdf04e3b0eaf65500172220a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:26 functional-102159 dockerd[8509]: time="2023-01-14T10:25:26.280522000Z" level=info msg="ignoring event" container=df005816fd7c743b9a1e88f82564d0c993c681188c62101090955d5f404bd475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:25:40 functional-102159 dockerd[8509]: time="2023-01-14T10:25:40.920535900Z" level=info msg="ignoring event" container=d01dce30be6634fda3259f769b014189a59c9dccade3aa73e1ec87afda159f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:27:06 functional-102159 dockerd[8509]: time="2023-01-14T10:27:06.298124800Z" level=info msg="ignoring event" container=ba990b9f959a302e47b3c89a85b515c85be33c58b9262db28ed033f600d3db8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:27:07 functional-102159 dockerd[8509]: time="2023-01-14T10:27:07.199036500Z" level=info msg="ignoring event" container=47c5551587d5d8038cfe55a9eed971866370d05f84e5438dc4f9da97623cb204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:28:16 functional-102159 dockerd[8509]: time="2023-01-14T10:28:16.307402400Z" level=info msg="ignoring event" container=afa9aaf7d7fdfb6820c25b2882344897e3ed6ae3c01c3c964780f8c32aaa36e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:28:17 functional-102159 dockerd[8509]: time="2023-01-14T10:28:17.730745600Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0338cc1a4b98d mysql@sha256:6306f106a056e24b3a2582a59a4c84cd199907f826eff27df36406f227cd9a7d 33 minutes ago Running mysql 0 eda543c2273b1
a7830dcb341e1 nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e 34 minutes ago Running myfrontend 0 900235733b143
d5073e9565a3d 82e4c8a736a4f 35 minutes ago Running echoserver 0 0cfba3becc101
14697928ea507 nginx@sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6 35 minutes ago Running nginx 0 faea8ad195ef0
126fb125f52e4 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 35 minutes ago Running echoserver 0 2d062826068d2
44589eb98582b 6e38f40d628db 36 minutes ago Running storage-provisioner 4 59aa08e330a75
df5aed183a9f7 5185b96f0becf 36 minutes ago Running coredns 3 1ff7efe5ca6cf
6f8558a3b16a7 0346dbd74bcb9 36 minutes ago Running kube-apiserver 0 d5f80083e8cac
e2a0f337c0b52 6039992312758 36 minutes ago Running kube-controller-manager 3 c0e77e6470c9f
775ae3ed4b9d9 a8a176a5d5d69 36 minutes ago Running etcd 3 46d5c84a5f5b1
d415ff2a68b71 6d23ec0e8b87e 36 minutes ago Running kube-scheduler 3 cc8eb135e4c65
4ca61b0fe8ea6 beaaf00edd38a 36 minutes ago Running kube-proxy 3 c79d232c187a6
a6a24bfc13562 6e38f40d628db 37 minutes ago Exited storage-provisioner 3 214fa47545901
3570b5740a849 5185b96f0becf 37 minutes ago Exited coredns 2 47d4814b57604
5623e194917fb 6039992312758 37 minutes ago Exited kube-controller-manager 2 952ac0f27f986
de11e4aa3fdd2 a8a176a5d5d69 37 minutes ago Exited etcd 2 70411bd9f595d
90a933a59d269 beaaf00edd38a 37 minutes ago Exited kube-proxy 2 d0d5611bfd093
e5277841b152d 6d23ec0e8b87e 37 minutes ago Exited kube-scheduler 2 07813b99c43a1
*
* ==> coredns [3570b5740a84] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [df5aed183a9f] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: functional-102159
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-102159
kubernetes.io/os=linux
minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
minikube.k8s.io/name=functional-102159
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_14T10_23_15_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 10:23:10 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-102159
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 11:01:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 14 Jan 2023 10:59:44 +0000 Sat, 14 Jan 2023 10:23:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 14 Jan 2023 10:59:44 +0000 Sat, 14 Jan 2023 10:23:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 14 Jan 2023 10:59:44 +0000 Sat, 14 Jan 2023 10:23:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 14 Jan 2023 10:59:44 +0000 Sat, 14 Jan 2023 10:23:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-102159
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: dc065f8e2d1f42529ccfe18f8b887c8c
System UUID: dc065f8e2d1f42529ccfe18f8b887c8c
Boot ID: abbf2dbe-7291-44a4-8406-1487b6f3b20a
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5fcdfb5cc4-m9lg9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m
default hello-node-connect-6458c8fb6f-5bzgt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default mysql-596b7fcdbf-r5qcr 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 34m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
kube-system coredns-565d847f94-b8m5m 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-102159 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-102159 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system kube-controller-manager-functional-102159 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-82zd2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-102159 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38m kube-proxy
Normal Starting 36m kube-proxy
Normal Starting 37m kube-proxy
Normal NodeHasSufficientMemory 39m (x7 over 39m) kubelet Node functional-102159 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x6 over 39m) kubelet Node functional-102159 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x6 over 39m) kubelet Node functional-102159 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-102159 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-102159 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-102159 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-102159 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-102159 event: Registered Node functional-102159 in Controller
Normal RegisteredNode 37m node-controller Node functional-102159 event: Registered Node functional-102159 in Controller
Normal Starting 36m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node functional-102159 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node functional-102159 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node functional-102159 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 36m node-controller Node functional-102159 event: Registered Node functional-102159 in Controller
*
* ==> dmesg <==
* [Jan14 10:37] WSL2: Performing memory compaction.
[Jan14 10:38] WSL2: Performing memory compaction.
[Jan14 10:39] WSL2: Performing memory compaction.
[Jan14 10:40] WSL2: Performing memory compaction.
[Jan14 10:41] WSL2: Performing memory compaction.
[Jan14 10:42] WSL2: Performing memory compaction.
[Jan14 10:43] WSL2: Performing memory compaction.
[Jan14 10:44] WSL2: Performing memory compaction.
[Jan14 10:45] WSL2: Performing memory compaction.
[Jan14 10:46] WSL2: Performing memory compaction.
[Jan14 10:47] WSL2: Performing memory compaction.
[Jan14 10:48] WSL2: Performing memory compaction.
[Jan14 10:49] WSL2: Performing memory compaction.
[Jan14 10:50] WSL2: Performing memory compaction.
[Jan14 10:51] WSL2: Performing memory compaction.
[Jan14 10:52] WSL2: Performing memory compaction.
[Jan14 10:53] WSL2: Performing memory compaction.
[Jan14 10:54] WSL2: Performing memory compaction.
[Jan14 10:55] WSL2: Performing memory compaction.
[Jan14 10:56] WSL2: Performing memory compaction.
[Jan14 10:57] WSL2: Performing memory compaction.
[Jan14 10:58] WSL2: Performing memory compaction.
[Jan14 10:59] WSL2: Performing memory compaction.
[Jan14 11:00] WSL2: Performing memory compaction.
[Jan14 11:01] WSL2: Performing memory compaction.
*
* ==> etcd [775ae3ed4b9d] <==
* {"level":"info","ts":"2023-01-14T10:28:52.237Z","caller":"traceutil/trace.go:171","msg":"trace[367315292] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:962; }","duration":"955.6113ms","start":"2023-01-14T10:28:51.281Z","end":"2023-01-14T10:28:52.237Z","steps":["trace[367315292] 'read index received' (duration: 955.6015ms)","trace[367315292] 'applied index is now lower than readState.Index' (duration: 6.6µs)"],"step_count":2}
{"level":"warn","ts":"2023-01-14T10:28:52.237Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"820.9913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13520"}
{"level":"warn","ts":"2023-01-14T10:28:52.237Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"956.0255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[123146674] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:871; }","duration":"821.2737ms","start":"2023-01-14T10:28:51.416Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[123146674] 'agreement among raft nodes before linearized reading' (duration: 820.9051ms)"],"step_count":1}
{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[967262301] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:871; }","duration":"956.1071ms","start":"2023-01-14T10:28:51.281Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[967262301] 'agreement among raft nodes before linearized reading' (duration: 955.9817ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.416Z","time spent":"821.3683ms","remote":"127.0.0.1:46316","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13544,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.4476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[2073246068] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:871; }","duration":"121.4898ms","start":"2023-01-14T10:28:52.116Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[2073246068] 'agreement among raft nodes before linearized reading' (duration: 121.402ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"697.5543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[419299038] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"697.6117ms","start":"2023-01-14T10:28:51.540Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[419299038] 'agreement among raft nodes before linearized reading' (duration: 697.5111ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.540Z","time spent":"697.6939ms","remote":"127.0.0.1:46328","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.281Z","time spent":"956.1915ms","remote":"127.0.0.1:46376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
{"level":"info","ts":"2023-01-14T10:35:34.937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
{"level":"info","ts":"2023-01-14T10:35:34.939Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":953,"took":"1.3255ms"}
{"level":"info","ts":"2023-01-14T10:40:34.972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1164}
{"level":"info","ts":"2023-01-14T10:40:34.973Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1164,"took":"540.3µs"}
{"level":"info","ts":"2023-01-14T10:44:12.535Z","caller":"traceutil/trace.go:171","msg":"trace[944493598] transaction","detail":"{read_only:false; response_revision:1525; number_of_response:1; }","duration":"119.7332ms","start":"2023-01-14T10:44:12.415Z","end":"2023-01-14T10:44:12.535Z","steps":["trace[944493598] 'process raft request' (duration: 94.5731ms)","trace[944493598] 'compare' (duration: 24.7699ms)"],"step_count":2}
{"level":"info","ts":"2023-01-14T10:45:34.989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1374}
{"level":"info","ts":"2023-01-14T10:45:34.990Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1374,"took":"1.1676ms"}
{"level":"info","ts":"2023-01-14T10:50:35.013Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1585}
{"level":"info","ts":"2023-01-14T10:50:35.014Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1585,"took":"1.0628ms"}
{"level":"info","ts":"2023-01-14T10:55:35.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1795}
{"level":"info","ts":"2023-01-14T10:55:35.036Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1795,"took":"693.8µs"}
{"level":"info","ts":"2023-01-14T11:00:35.054Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2005}
{"level":"info","ts":"2023-01-14T11:00:35.056Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2005,"took":"736.1µs"}
*
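
Note: etcd's "apply request took too long" warnings fire whenever a request exceeds the fixed 100ms expected-duration; the 821ms range over /registry/pods/default/ and the 956ms volumeattachments count here line up with the slow pod List traces in the kube-apiserver section below. A minimal sketch of timing the same kind of prefix read with the etcd v3 Go client follows; the plaintext endpoint and the 3s deadline are illustrative assumptions (the etcd above serves TLS on 127.0.0.1:2379, so real use would also need the cluster's etcd client certs).

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Illustrative endpoint; this etcd actually requires TLS client certs.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	start := time.Now()
	resp, err := cli.Get(ctx, "/registry/pods/default/", clientv3.WithPrefix())
	took := time.Since(start)
	if err != nil {
		panic(err)
	}
	// etcd logs "apply request took too long" when this exceeds 100ms.
	fmt.Printf("range returned %d keys in %s\n", resp.Count, took)
}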
* ==> etcd [de11e4aa3fdd] <==
* {"level":"info","ts":"2023-01-14T10:24:12.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:24:12.610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T10:24:12.610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T10:24:12.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-01-14T10:24:12.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2023-01-14T10:24:20.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.58ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-01-14T10:24:20.700Z","caller":"traceutil/trace.go:171","msg":"trace[1620044353] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:417; }","duration":"104.7522ms","start":"2023-01-14T10:24:20.595Z","end":"2023-01-14T10:24:20.700Z","steps":["trace[1620044353] 'range keys from in-memory index tree' (duration: 104.397ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:24:20.700Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.5001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
{"level":"info","ts":"2023-01-14T10:24:20.700Z","caller":"traceutil/trace.go:171","msg":"trace[190619426] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:417; }","duration":"104.5681ms","start":"2023-01-14T10:24:20.595Z","end":"2023-01-14T10:24:20.700Z","steps":["trace[190619426] 'range keys from in-memory index tree' (duration: 104.3749ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:24:20.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.6737ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128018417634151038 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-functional-102159.173a25d95b8cab6c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-functional-102159.173a25d95b8cab6c\" value_size:714 lease:8128018417634151025 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[1394572518] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"104.6355ms","start":"2023-01-14T10:24:20.598Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[1394572518] 'process raft request' (duration: 104.5468ms)"],"step_count":1}
{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[1625638255] linearizableReadLoop","detail":"{readStateIndex:440; appliedIndex:439; }","duration":"105.9784ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[1625638255] 'read index received' (duration: 99.1254ms)","trace[1625638255] 'applied index is now lower than readState.Index' (duration: 6.8478ms)"],"step_count":2}
{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[3976706] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"106.4072ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[3976706] 'compare' (duration: 102.7602ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:24:20.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.3222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-public\" ","response":"range_response_count:1 size:351"}
{"level":"info","ts":"2023-01-14T10:24:20.703Z","caller":"traceutil/trace.go:171","msg":"trace[2012943458] range","detail":"{range_begin:/registry/namespaces/kube-public; range_end:; response_count:1; response_revision:419; }","duration":"106.4567ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[2012943458] 'agreement among raft nodes before linearized reading' (duration: 106.2969ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:24:20.708Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.1778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1908"}
{"level":"info","ts":"2023-01-14T10:24:20.708Z","caller":"traceutil/trace.go:171","msg":"trace[1930976732] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:419; }","duration":"111.3752ms","start":"2023-01-14T10:24:20.597Z","end":"2023-01-14T10:24:20.708Z","steps":["trace[1930976732] 'agreement among raft nodes before linearized reading' (duration: 111.1419ms)"],"step_count":1}
{"level":"info","ts":"2023-01-14T10:25:04.994Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-01-14T10:25:04.995Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-102159","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2023/01/14 10:25:04 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2023/01/14 10:25:05 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2023-01-14T10:25:05.095Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2023-01-14T10:25:05.107Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-14T10:25:05.109Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-01-14T10:25:05.109Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-102159","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
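
Note: this earlier etcd instance exits cleanly at 10:25:04: it logs "received signal; shutting down", skips leadership transfer because it is a single voting member, and closes the peer listener; the grpc "connection refused" warnings are just its own internal clients noticing the listeners going away. A minimal sketch of that terminate-on-signal contract using Go's signal.NotifyContext (illustrative only, not etcd's actual osutil code):

package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
)

func main() {
	// Cancel the context on SIGTERM/SIGINT, then run the shutdown sequence,
	// mirroring the "received signal; shutting down" -> "closed etcd server"
	// ordering in the log above.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	log.Println("serving; waiting for signal")
	<-ctx.Done()
	log.Println("received signal; shutting down")
	// ... stop serving client and peer traffic, then close the server ...
}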
* ==> kernel <==
* 11:02:04 up 56 min, 0 users, load average: 0.39, 0.47, 0.56
Linux functional-102159 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [6f8558a3b16a] <==
* I0114 10:26:02.506569 1 controller.go:616] quota admission added evaluator for: replicasets.apps
I0114 10:26:02.802701 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.103.134.250]
I0114 10:26:02.915129 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0114 10:26:10.222011 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.250.152]
I0114 10:26:24.028970 1 trace.go:205] Trace[1417863744]: "Get" url:/api/v1/namespaces/default/persistentvolumeclaims/myclaim,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:125fd9ef-a25e-4a8d-93a4-4362e1c676e4,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (14-Jan-2023 10:26:23.439) (total time: 589ms):
Trace[1417863744]: ---"About to write a response" 589ms (10:26:24.028)
Trace[1417863744]: [589.2883ms] [589.2883ms] END
I0114 10:26:42.513526 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.111.220.79]
I0114 10:27:40.224740 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.111.101.114]
I0114 10:28:13.696602 1 trace.go:205] Trace[189169]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints (14-Jan-2023 10:28:12.198) (total time: 1498ms):
Trace[189169]: ---"Txn call finished" err:<nil> 1493ms (10:28:13.696)
Trace[189169]: [1.4980267s] [1.4980267s] END
I0114 10:28:13.697507 1 trace.go:205] Trace[1760240503]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:8448ab6b-6d05-4550-beab-2c294c8e0291,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:12.913) (total time: 783ms):
Trace[1760240503]: ---"About to write a response" 783ms (10:28:13.697)
Trace[1760240503]: [783.5553ms] [783.5553ms] END
I0114 10:28:13.698861 1 trace.go:205] Trace[1912017713]: "List(recursive=true) etcd3" audit-id:e85cf58c-64c7-476c-a742-a387cf0f41e8,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Jan-2023 10:28:12.408) (total time: 1289ms):
Trace[1912017713]: [1.2899448s] [1.2899448s] END
I0114 10:28:13.699907 1 trace.go:205] Trace[437431457]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:e85cf58c-64c7-476c-a742-a387cf0f41e8,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:12.408) (total time: 1291ms):
Trace[437431457]: ---"Listing from storage done" 1290ms (10:28:13.698)
Trace[437431457]: [1.2910478s] [1.2910478s] END
I0114 10:28:52.239713 1 trace.go:205] Trace[1047385238]: "List(recursive=true) etcd3" audit-id:06199082-3399-4c8e-a4aa-8002fc869ffa,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Jan-2023 10:28:51.415) (total time: 824ms):
Trace[1047385238]: [824.2437ms] [824.2437ms] END
I0114 10:28:52.240419 1 trace.go:205] Trace[1071649882]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:06199082-3399-4c8e-a4aa-8002fc869ffa,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:51.415) (total time: 825ms):
Trace[1071649882]: ---"Listing from storage done" 824ms (10:28:52.239)
Trace[1071649882]: [825.0645ms] [825.0645ms] END
*
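
Note: the apiserver Trace lines mark any request over its latency threshold; the 1.29s and 0.82s pod Lists here are the server-side view of the same slow etcd ranges logged above, and the client at 192.168.49.1 is the test binary itself (user-agent e2e-windows-amd64.exe). A minimal client-go sketch of the traced call, with an explicit deadline so a storage stall surfaces as an error; the kubeconfig resolution and the 2s budget are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Budget the call explicitly; the 1.29s List traced above would still
	// fit, but a real storage stall turns into "context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	start := time.Now()
	pods, err := clientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d pods in %s\n", len(pods.Items), time.Since(start))
}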
* ==> kube-controller-manager [5623e194917f] <==
* I0114 10:24:34.600319 1 shared_informer.go:262] Caches are synced for disruption
I0114 10:24:34.601811 1 shared_informer.go:262] Caches are synced for PVC protection
I0114 10:24:34.602550 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0114 10:24:34.604060 1 shared_informer.go:262] Caches are synced for expand
I0114 10:24:34.604162 1 shared_informer.go:262] Caches are synced for service account
I0114 10:24:34.694518 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0114 10:24:34.694670 1 shared_informer.go:262] Caches are synced for stateful set
I0114 10:24:34.694722 1 shared_informer.go:262] Caches are synced for job
I0114 10:24:34.694750 1 shared_informer.go:262] Caches are synced for ReplicationController
I0114 10:24:34.695401 1 shared_informer.go:262] Caches are synced for GC
I0114 10:24:34.702045 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0114 10:24:34.708159 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0114 10:24:34.795118 1 shared_informer.go:262] Caches are synced for taint
I0114 10:24:34.795138 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:24:34.795206 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I0114 10:24:34.795242 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0114 10:24:34.795320 1 taint_manager.go:209] "Sending events to api server"
I0114 10:24:34.795515 1 event.go:294] "Event occurred" object="functional-102159" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-102159 event: Registered Node functional-102159 in Controller"
W0114 10:24:34.795329 1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-102159. Assuming now as a timestamp.
I0114 10:24:34.795627 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0114 10:24:34.795209 1 shared_informer.go:262] Caches are synced for daemon sets
I0114 10:24:34.795434 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:24:35.025643 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:24:35.025750 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0114 10:24:35.109009 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-controller-manager [e2a0f337c0b5] <==
* I0114 10:25:54.895552 1 shared_informer.go:262] Caches are synced for namespace
I0114 10:25:54.898218 1 shared_informer.go:262] Caches are synced for taint
I0114 10:25:54.898469 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I0114 10:25:54.898532 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
W0114 10:25:54.898568 1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-102159. Assuming now as a timestamp.
I0114 10:25:54.898597 1 taint_manager.go:209] "Sending events to api server"
I0114 10:25:54.898630 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0114 10:25:54.898969 1 event.go:294] "Event occurred" object="functional-102159" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-102159 event: Registered Node functional-102159 in Controller"
I0114 10:25:54.899657 1 shared_informer.go:262] Caches are synced for expand
I0114 10:25:54.904720 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:25:54.904856 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:25:54.917344 1 shared_informer.go:262] Caches are synced for PV protection
I0114 10:25:54.921720 1 shared_informer.go:262] Caches are synced for persistent volume
I0114 10:25:54.997577 1 shared_informer.go:262] Caches are synced for attach detach
I0114 10:25:55.313016 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:25:55.313083 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:25:55.313273 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0114 10:26:02.511251 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
I0114 10:26:02.597778 1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-m9lg9"
I0114 10:26:21.398138 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0114 10:26:21.398303 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0114 10:26:41.996850 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
I0114 10:26:42.021680 1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-5bzgt"
I0114 10:27:40.324409 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
I0114 10:27:40.429763 1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-r5qcr"
*
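
Note: the "Event occurred" lines come from the controller-manager's event recorder: each controller publishes Events (RegisteredNode, ScalingReplicaSet, SuccessfulCreate, ExternalProvisioning) that later surface in kubectl describe output like the pod events at the top of this dump. A minimal sketch of emitting such an Event with client-go's record package; the component name and the in-cluster config are illustrative assumptions:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/record"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Wire a broadcaster to the API server's event sink, as controllers do.
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: clientset.CoreV1().Events("")})
	recorder := broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "sketch-controller"})

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "functional-102159", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Produces a record analogous to the RegisteredNode events above.
	recorder.Event(node, corev1.EventTypeNormal, "RegisteredNode", "sketch: node seen by controller")
	broadcaster.Shutdown() // flush queued events before exit
}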
* ==> kube-proxy [4ca61b0fe8ea] <==
* I0114 10:25:25.708632 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0114 10:25:25.711638 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0114 10:25:27.296379 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:38432->192.168.49.2:8441: read: connection reset by peer
E0114 10:25:28.469334 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
E0114 10:25:30.612361 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
I0114 10:25:39.710342 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0114 10:25:39.710422 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0114 10:25:39.710496 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:25:39.931333 1 server_others.go:206] "Using iptables Proxier"
I0114 10:25:39.931484 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0114 10:25:39.931498 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0114 10:25:39.931516 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0114 10:25:39.931550 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:25:39.931955 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:25:39.932895 1 server.go:661] "Version info" version="v1.25.3"
I0114 10:25:39.933002 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:25:39.934119 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:25:39.934424 1 config.go:317] "Starting service config controller"
I0114 10:25:39.934429 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:25:39.934193 1 config.go:444] "Starting node config controller"
I0114 10:25:39.934494 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:25:39.934478 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:25:40.095377 1 shared_informer.go:262] Caches are synced for service config
I0114 10:25:40.095550 1 shared_informer.go:262] Caches are synced for node config
I0114 10:25:40.095580 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
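
Note: this restarted kube-proxy spends 10:25:27 to 10:25:39 retrying "Failed to retrieve node info" because the apiserver on 192.168.49.2:8441 is still coming back up; once the GET succeeds it proceeds with the normal iptables setup. A minimal sketch of that poll-until-reachable pattern; the /readyz path, the 2s interval, and the InsecureSkipVerify shortcut for the cluster's self-signed certs are illustrative assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls an endpoint until it answers or the deadline passes,
// mirroring kube-proxy's retry loop above.
func waitForAPIServer(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: interval,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			return nil // "connection refused" has stopped; the server is back
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://control-plane.minikube.internal:8441/readyz",
		2*time.Second, 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("apiserver reachable")
}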
* ==> kube-proxy [90a933a59d26] <==
* I0114 10:24:11.199188 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0114 10:24:11.203035 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0114 10:24:11.206786 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0114 10:24:11.295659 1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0114 10:24:11.301318 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
I0114 10:24:20.498097 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0114 10:24:20.498257 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0114 10:24:20.498298 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:24:20.802191 1 server_others.go:206] "Using iptables Proxier"
I0114 10:24:20.802559 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0114 10:24:20.802582 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0114 10:24:20.802660 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0114 10:24:20.802753 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:24:20.803274 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:24:20.803938 1 server.go:661] "Version info" version="v1.25.3"
I0114 10:24:20.804055 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:24:20.804968 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:24:20.805388 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:24:20.805037 1 config.go:444] "Starting node config controller"
I0114 10:24:20.805417 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:24:20.805534 1 config.go:317] "Starting service config controller"
I0114 10:24:20.805650 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:24:20.909848 1 shared_informer.go:262] Caches are synced for node config
I0114 10:24:20.910069 1 shared_informer.go:262] Caches are synced for service config
I0114 10:24:20.910192 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [d415ff2a68b7] <==
* I0114 10:25:35.707598 1 serving.go:348] Generated self-signed cert in-memory
W0114 10:25:39.599464 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0114 10:25:39.599615 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0114 10:25:39.599645 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 10:25:39.599664 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 10:25:39.708709 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 10:25:39.708912 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:25:39.711663 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 10:25:39.711936 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:25:39.711705 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 10:25:39.711758 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 10:25:39.814090 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [e5277841b152] <==
* I0114 10:24:13.208214 1 serving.go:348] Generated self-signed cert in-memory
W0114 10:24:20.398862 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0114 10:24:20.398929 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0114 10:24:20.398949 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 10:24:20.398965 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 10:24:20.502211 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 10:24:20.502385 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:24:20.505158 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 10:24:20.505270 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 10:24:20.505304 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:24:20.599818 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 10:24:20.706374 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:25:04.904652 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0114 10:25:04.905093 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0114 10:25:04.905114 1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
E0114 10:25:04.905626 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Sat 2023-01-14 10:22:38 UTC, end at Sat 2023-01-14 11:02:05 UTC. --
Jan 14 10:26:42 functional-102159 kubelet[10330]: I0114 10:26:42.922888 10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0cfba3becc101bb395aed7f49535287442acfd305b87404974d2592b95df3a9f"
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.004742 10330 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") pod \"b8420616-b5fd-42c0-b7a9-c83030278ce5\" (UID: \"b8420616-b5fd-42c0-b7a9-c83030278ce5\") "
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.004934 10330 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" (OuterVolumeSpecName: "mypd") pod "b8420616-b5fd-42c0-b7a9-c83030278ce5" (UID: "b8420616-b5fd-42c0-b7a9-c83030278ce5"). InnerVolumeSpecName "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.005173 10330 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxxl\" (UniqueName: \"kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl\") pod \"b8420616-b5fd-42c0-b7a9-c83030278ce5\" (UID: \"b8420616-b5fd-42c0-b7a9-c83030278ce5\") "
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.005289 10330 reconciler.go:399] "Volume detached for volume \"pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\" (UniqueName: \"kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") on node \"functional-102159\" DevicePath \"\""
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.008486 10330 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl" (OuterVolumeSpecName: "kube-api-access-pwxxl") pod "b8420616-b5fd-42c0-b7a9-c83030278ce5" (UID: "b8420616-b5fd-42c0-b7a9-c83030278ce5"). InnerVolumeSpecName "kube-api-access-pwxxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.106818 10330 reconciler.go:399] "Volume detached for volume \"kube-api-access-pwxxl\" (UniqueName: \"kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl\") on node \"functional-102159\" DevicePath \"\""
Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.920470 10330 scope.go:115] "RemoveContainer" containerID="ba990b9f959a302e47b3c89a85b515c85be33c58b9262db28ed033f600d3db8e"
Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.433759 10330 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:27:09 functional-102159 kubelet[10330]: E0114 10:27:09.433969 10330 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="b8420616-b5fd-42c0-b7a9-c83030278ce5" containerName="myfrontend"
Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.434114 10330 memory_manager.go:345] "RemoveStaleState removing state" podUID="b8420616-b5fd-42c0-b7a9-c83030278ce5" containerName="myfrontend"
Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.519267 10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\" (UniqueName: \"kubernetes.io/host-path/836f5e25-6f77-4207-bf2f-01a2a8b4de80-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") pod \"sp-pod\" (UID: \"836f5e25-6f77-4207-bf2f-01a2a8b4de80\") " pod="default/sp-pod"
Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.519443 10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxc9m\" (UniqueName: \"kubernetes.io/projected/836f5e25-6f77-4207-bf2f-01a2a8b4de80-kube-api-access-gxc9m\") pod \"sp-pod\" (UID: \"836f5e25-6f77-4207-bf2f-01a2a8b4de80\") " pod="default/sp-pod"
Jan 14 10:27:10 functional-102159 kubelet[10330]: I0114 10:27:10.720194 10330 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b8420616-b5fd-42c0-b7a9-c83030278ce5 path="/var/lib/kubelet/pods/b8420616-b5fd-42c0-b7a9-c83030278ce5/volumes"
Jan 14 10:27:11 functional-102159 kubelet[10330]: I0114 10:27:11.029377 10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="900235733b1433ea55b63bae15b58951cdd983dd8b02e6ddf6274319cd0c8a46"
Jan 14 10:27:40 functional-102159 kubelet[10330]: I0114 10:27:40.516852 10330 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:27:40 functional-102159 kubelet[10330]: I0114 10:27:40.716464 10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsb9\" (UniqueName: \"kubernetes.io/projected/49328d8a-bf22-466e-b722-c9d1061506d0-kube-api-access-8hsb9\") pod \"mysql-596b7fcdbf-r5qcr\" (UID: \"49328d8a-bf22-466e-b722-c9d1061506d0\") " pod="default/mysql-596b7fcdbf-r5qcr"
Jan 14 10:27:43 functional-102159 kubelet[10330]: I0114 10:27:43.701968 10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="eda543c2273b19508cc70d4b476b5d7188032c446c65d1b4ab96497d71241676"
Jan 14 10:30:31 functional-102159 kubelet[10330]: W0114 10:30:31.039513 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 10:35:31 functional-102159 kubelet[10330]: W0114 10:35:31.040450 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 10:40:31 functional-102159 kubelet[10330]: W0114 10:40:31.044406 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 10:45:31 functional-102159 kubelet[10330]: W0114 10:45:31.047836 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 10:50:31 functional-102159 kubelet[10330]: W0114 10:50:31.050558 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 10:55:31 functional-102159 kubelet[10330]: W0114 10:55:31.049940 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan 14 11:00:31 functional-102159 kubelet[10330]: W0114 11:00:31.115669 10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [44589eb98582] <==
* I0114 10:25:42.696348 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:25:42.805823 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:25:42.805995 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 10:26:00.228433 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 10:26:00.228747 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cad4d3da-c442-4946-b691-f57647b16439", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29 became leader
I0114 10:26:00.228855 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29!
I0114 10:26:00.330745 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29!
I0114 10:26:21.398353 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0114 10:26:21.399060 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0114 10:26:21.398622 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard c38f9802-a8bf-4487-a640-1f377c5ca0db 372 0 2023-01-14 10:23:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-01-14 10:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05 &PersistentVolumeClaim{ObjectMeta:{myclaim default a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05 676 0 2023-01-14 10:26:21 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2023-01-14 10:26:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2023-01-14 10:26:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0114 10:26:21.401499 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" provisioned
I0114 10:26:21.401630 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0114 10:26:21.401644 1 volume_store.go:212] Trying to save persistentvolume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05"
I0114 10:26:21.599192 1 volume_store.go:219] persistentvolume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" saved
I0114 10:26:21.599518 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05
*
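
Note: the provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath lease; the older instance [a6a24bfc1356] below acquired it at 10:24:59 and this replacement re-acquired it at 10:26:00, here via an Endpoints-based lock. A minimal sketch of the same pattern with client-go's leaderelection package, using the newer Lease lock rather than the Endpoints lock shown in the log; the identity, durations, and in-cluster config are assumptions:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	config, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	id, _ := os.Hostname() // the log above appends a UUID to the profile name

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}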
* ==> storage-provisioner [a6a24bfc1356] <==
* I0114 10:24:41.795629 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:24:41.817898 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:24:41.818079 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 10:24:59.237792 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 10:24:59.238905 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cad4d3da-c442-4946-b691-f57647b16439", APIVersion:"v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8 became leader
I0114 10:24:59.239489 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8!
I0114 10:24:59.339991 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-102159 -n functional-102159
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-102159 -n functional-102159: (1.624775s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-102159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-102159 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-102159 describe pod : exit status 1 (211.6647ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-102159 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2165.43s)
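
Note: the post-mortem's pod filter (status.phase!=Running) returned nothing, so the follow-up kubectl describe pod ran with no names and failed with "resource name may not be empty"; that is expected when all pods are healthy and is unrelated to the ServiceCmd timeout itself. A client-go sketch of the same non-running-pods query follows; the kubeconfig resolution is an assumption, while the field selector string is exactly the one used above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Same filter as the helpers_test.go post-mortem: every pod, in any
	// namespace, whose phase is not Running.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}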