=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run: kubectl --context functional-20220701224009-7720 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run: kubectl --context functional-20220701224009-7720 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-dz4cz" [cf20cd84-bf07-45ad-85e9-bcc446ae417c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54c4b5c49f-dz4cz" [cf20cd84-bf07-45ad-85e9-bcc446ae417c] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 18.0784308s
functional_test.go:1448: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 service list: (3.8276688s)
functional_test.go:1462: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 service --namespace=default --https --url hello-node
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1391: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 service --namespace=default --https --url hello-node: exit status 1 (32m39.5791625s)
-- stdout --
https://127.0.0.1:64128
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220701224009-7720 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run: kubectl --context functional-20220701224009-7720 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name: hello-node-54c4b5c49f-dz4cz
Namespace: default
Priority: 0
Node: functional-20220701224009-7720/192.168.49.2
Start Time: Fri, 01 Jul 2022 22:47:07 +0000
Labels: app=hello-node
pod-template-hash=54c4b5c49f
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/hello-node-54c4b5c49f
Containers:
echoserver:
Container ID: docker://1f9592d09ff85a89351019b9568834a478ffe3ca6bf441225a2b945666826195
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 01 Jul 2022 22:47:19 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bk6mq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-bk6mq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-54c4b5c49f-dz4cz to functional-20220701224009-7720
Normal Pulling 32m kubelet, functional-20220701224009-7720 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 32m kubelet, functional-20220701224009-7720 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 7.1320812s
Normal Created 32m kubelet, functional-20220701224009-7720 Created container echoserver
Normal Started 32m kubelet, functional-20220701224009-7720 Started container echoserver
Name: hello-node-connect-578cdc45cb-ktvzx
Namespace: default
Priority: 0
Node: functional-20220701224009-7720/192.168.49.2
Start Time: Fri, 01 Jul 2022 22:46:58 +0000
Labels: app=hello-node-connect
pod-template-hash=578cdc45cb
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-connect-578cdc45cb
Containers:
echoserver:
Container ID: docker://c24b01dec825d318db6b78f0440fa1d9d457c7a108027eed208bd7f34335fc9a
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 01 Jul 2022 22:47:18 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jlhm8 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-jlhm8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-578cdc45cb-ktvzx to functional-20220701224009-7720
Normal Pulling 33m kubelet, functional-20220701224009-7720 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 32m kubelet, functional-20220701224009-7720 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 15.7692181s
Normal Created 32m kubelet, functional-20220701224009-7720 Created container echoserver
Normal Started 32m kubelet, functional-20220701224009-7720 Started container echoserver
functional_test.go:1411: (dbg) Run: kubectl --context functional-20220701224009-7720 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run: kubectl --context functional-20220701224009-7720 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.105.143.60
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32211/TCP
Endpoints: 172.17.0.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220701224009-7720
helpers_test.go:235: (dbg) docker inspect functional-20220701224009-7720:
-- stdout --
[
{
"Id": "fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5",
"Created": "2022-07-01T22:40:49.1236761Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 26499,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-01T22:40:50.0861752Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
"ResolvConfPath": "/var/lib/docker/containers/fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5/hostname",
"HostsPath": "/var/lib/docker/containers/fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5/hosts",
"LogPath": "/var/lib/docker/containers/fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5/fdd0791b976d5319810fcf533e0a1ec563d178a6c444b623ffa48dd46320e9a5-json.log",
"Name": "/functional-20220701224009-7720",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220701224009-7720:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220701224009-7720",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/271f189eddccd281539abc8e376a0f4ec97f1fd9478a2ead3a4978a782f0d842-init/diff:/var/lib/docker/overlay2/17f9d244dd472cd6205e7679e00592a18530af8714c7081a3ed529b133a44193/diff:/var/lib/docker/overlay2/b01db57f0be9c401e8c6538cb644f9b4e708558ebd750596d9e8c908d38c68cc/diff:/var/lib/docker/overlay2/5dac2609c1f85324b09f8a74422266a1ec699eb3ab56920ae55bf2342ffd2c84/diff:/var/lib/docker/overlay2/e2bf4f1b0882ab03f49a668c15cfbdc452be633042d55e2e54767235144fca5e/diff:/var/lib/docker/overlay2/7bac63b3396f99aa7359e4576a33936431cf1af9c3b7550b4cc35584a63537e8/diff:/var/lib/docker/overlay2/2fa527990a12b30a48889002badb98136ccf4085f35a5aedb653aea31e6ef482/diff:/var/lib/docker/overlay2/cffeeaa473eacaba4846cf73078f9d47c659d38b9a6cee85d72d85d95521250c/diff:/var/lib/docker/overlay2/025799dfb643ca0afd0e8a9654823798d01b410142cf6cbd1c10d238653b498e/diff:/var/lib/docker/overlay2/7b1648522336c8bf9538e70a0b6ec0b65f48f367ed3f537dc77edae835d3e78b/diff:/var/lib/docker/overlay2/8d26907a09ed149f82dedb37c09a922b0b3f866bedd5d7d77fac779863eb22fd/diff:/var/lib/docker/overlay2/af3bde8edebc9d5abb5a8502e626c1d9ad2c81fff9516e83b73f40eb788cee01/diff:/var/lib/docker/overlay2/7c3c2b783fc111558d06c70cd952f51004e739b2c79fe8977bf9d6685db2ba4d/diff:/var/lib/docker/overlay2/1cc9f4d02718faceedb942193b52c67991abb73a89efd945049ca24132e9b320/diff:/var/lib/docker/overlay2/e1d55e9b7fbb239dc507ca218ca2e6e20e0eebb88e4399558eedb4834ce83177/diff:/var/lib/docker/overlay2/4544b5efa5fa74c8a344467cd43923b442d4455415b1b3ef505a9c10ad44a089/diff:/var/lib/docker/overlay2/e0e4b40578167b82882147e9fa0c5aff0bec423e2b33579f28c050fdcdebe5ae/diff:/var/lib/docker/overlay2/305d9d90bc2f528953db3be1228ca0849eb861d5006295db94dc291e6ca1d18e/diff:/var/lib/docker/overlay2/5eae6e92092dcdfebbeb5465e3443d65a6784bd0765fb243089fae86090f5610/diff:/var/lib/docker/overlay2/ffef624ee8b2fe629e0845bff9220eb55143f1de5bc39328c721b809520aa72d/diff:/var/lib/docker/overlay2/8c69b22a1d326c476a5e155459b152828201ebc3396bbf06803dae5bc93716dd/diff:/var/lib/docker/overlay2/9d9c881f5c56cc77a661121b11563aaf455abd493c7c039ffa275d9b080f4cbe/diff:/var/lib/docker/overlay2/5e83e16060aa1707c00a50973f06a4b65619c4cca30ba3d1482cb6d02e4c6136/diff:/var/lib/docker/overlay2/ac6693a3ba8458993f75904e50f3cf9d8792138cef6385d2278d4f7bc8185854/diff:/var/lib/docker/overlay2/ac26b1f3f0c8b0a334bbf56a3a9538dd23abf6733af801a15a8649258e2e3ca3/diff:/var/lib/docker/overlay2/77be4a11d5795df86680b3bae334bedcda38e4520d0af629f7937be8bf450178/diff:/var/lib/docker/overlay2/9c6c2386dc9f27a39696369d1135472bc9c2d470dd14902d9d35e0a4c60f2e02/diff:/var/lib/docker/overlay2/94ba5d0953b970930fcc776654232ae8c9edd66b85d4d0976f2e34139a717903/diff:/var/lib/docker/overlay2/86b47c03ab402cd7f5d2ea8c03c7ccf2da4f21c6a529f1e631b7705d0739b562/diff:/var/lib/docker/overlay2/9e323e888d066d8b8f98218e0256d8bf04f4fb926c5cd7c7f4652fa65f21f01b/diff:/var/lib/docker/overlay2/638f9be131af0f2b0b6e40f9bc85d22641a26ab7353eee4e0b30df8053ba7212/diff:/var/lib/docker/overlay2/ce9256ecfa111e0ef0b61c8cbb9ea68e6e4c84a2aca0d481c76dcf9730e758df/diff:/var/lib/docker/overlay2/f0efe07d1593dda140dcf939126fa7b22357eb2b1163e28e8fa7bf06cc863db1/diff:/var/lib/docker/overlay2/34649269d43e06817a45a3b4a83279a164a98fc22800531b86000cb6bfb036e0/diff:/var/lib/docker/overlay2/593d934248a4d5daa8ea3572712e176dd5cd874529697fd358088240a2bcb6cc/diff:/var/lib/docker/overlay2/eafd55a90d4037089efe6daf67288c25f484603d9d781568a7fb33670de2b022/diff:/var/lib/docker/overlay2/9d5c978b09f97c98d0bf6cf06499dddffcc039bd7b92aa8bc20c1e678f0bb89b/diff:/var/lib/docker/overlay2/87f1af2e0194eeb9adf134881746c3bdd8dfb7c7df0fff9e647542596aafbb9e/diff:/var/lib/docker/overlay2/7f8e720c8b355b2ff064abb0eade5d533662029fef68024aa5644008cdd84a66/diff:/var/lib/docker/overlay2/005dc9f906769d1cc9362bee5e2dabcd05db22e24ed6531c91eba8a3f5fe4b34/diff:/var/lib/docker/overlay2/d545229b04afb1a3be7c6249d9ba8bb2e84b5b67efb83522da418a12dd5f668b/diff:/var/lib/docker/overlay2/e6556870b313484e6a46a1b94cabe2ed2f3e449b2bae8aae97baeef32988d655/diff",
"MergedDir": "/var/lib/docker/overlay2/271f189eddccd281539abc8e376a0f4ec97f1fd9478a2ead3a4978a782f0d842/merged",
"UpperDir": "/var/lib/docker/overlay2/271f189eddccd281539abc8e376a0f4ec97f1fd9478a2ead3a4978a782f0d842/diff",
"WorkDir": "/var/lib/docker/overlay2/271f189eddccd281539abc8e376a0f4ec97f1fd9478a2ead3a4978a782f0d842/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20220701224009-7720",
"Source": "/var/lib/docker/volumes/functional-20220701224009-7720/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20220701224009-7720",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220701224009-7720",
"name.minikube.sigs.k8s.io": "functional-20220701224009-7720",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "794d189a1374f1d619211a578e64a6d0c99ad595791e65b5b704a9811736456b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63548"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63544"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63545"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63546"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "63547"
}
]
},
"SandboxKey": "/var/run/docker/netns/794d189a1374",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220701224009-7720": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"fdd0791b976d",
"functional-20220701224009-7720"
],
"NetworkID": "c6b6210282521be40708136d1417d541062b42daaa2236d484e164d387b628d3",
"EndpointID": "c4ee86df6bd053321c338aba683d30192f5267d00a9611c18c988f6049a58028",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220701224009-7720 -n functional-20220701224009-7720
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220701224009-7720 -n functional-20220701224009-7720: (3.1081186s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220701224009-7720 logs -n 25: (5.9711373s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
| addons | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:45 GMT | 01 Jul 22 22:45 GMT |
| | addons list -o json | | | | | |
| tunnel | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:45 GMT | |
| | tunnel --alsologtostderr | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:45 GMT | 01 Jul 22 22:45 GMT |
| | image ls | | | | | |
| image | functional-20220701224009-7720 image load --daemon | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:45 GMT | 01 Jul 22 22:46 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220701224009-7720 | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | image ls | | | | | |
| image | functional-20220701224009-7720 image load --daemon | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220701224009-7720 | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | image ls | | | | | |
| image | functional-20220701224009-7720 image save | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220701224009-7720 | | | | | |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220701224009-7720 image rm | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220701224009-7720 | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | image ls | | | | | |
| image | functional-20220701224009-7720 image load | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:46 GMT |
| | image ls | | | | | |
| image | functional-20220701224009-7720 image save --daemon | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:46 GMT | 01 Jul 22 22:47 GMT |
| | gcr.io/google-containers/addon-resizer:functional-20220701224009-7720 | | | | | |
| update-context | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| service | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | service list | | | | | |
| update-context | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| service | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | |
| | service --namespace=default | | | | | |
| | --https --url hello-node | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | image ls --format short | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | image ls --format yaml | | | | | |
| ssh | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | |
| | ssh pgrep buildkitd | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | image ls --format json | | | | | |
| image | functional-20220701224009-7720 image build -t | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | localhost/my-image:functional-20220701224009-7720 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | image ls --format table | | | | | |
| image | functional-20220701224009-7720 | minikube | minikube8\jenkins | v1.26.0 | 01 Jul 22 22:47 GMT | 01 Jul 22 22:47 GMT |
| | image ls | | | | | |
|----------------|------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/01 22:45:12
Running on machine: minikube8
Binary: Built with gc go1.18.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 22:45:12.713020 9692 out.go:296] Setting OutFile to fd 968 ...
I0701 22:45:12.777951 9692 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:45:12.777951 9692 out.go:309] Setting ErrFile to fd 640...
I0701 22:45:12.777951 9692 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:45:12.810094 9692 out.go:303] Setting JSON to false
I0701 22:45:12.820958 9692 start.go:115] hostinfo: {"hostname":"minikube8","uptime":5584,"bootTime":1656709928,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W0701 22:45:12.822002 9692 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0701 22:45:12.824963 9692 out.go:177] * [functional-20220701224009-7720] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0701 22:45:12.835504 9692 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I0701 22:45:12.840688 9692 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I0701 22:45:12.843893 9692 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:45:12.846384 9692 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:45:12.849246 9692 config.go:178] Loaded profile config "functional-20220701224009-7720": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:45:12.850520 9692 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:45:14.891391 9692 docker.go:137] docker version: linux-20.10.17
I0701 22:45:14.901385 9692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:45:15.509830 9692 info.go:265] docker info: {ID:6JQP:BSZO:ZMDW:XFCZ:GJXY:VIOR:EQHC:RQZU:MBWW:32AB:GFHO:KTCS Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-07-01 22:45:15.0748359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:45:15.516357 9692 out.go:177] * Using the docker driver based on existing profile
I0701 22:45:15.518722 9692 start.go:284] selected driver: docker
I0701 22:45:15.518773 9692 start.go:808] validating driver "docker" against &{Name:functional-20220701224009-7720 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220701224009-7720 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:45:15.519077 9692 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:45:15.535437 9692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:45:16.170691 9692 info.go:265] docker info: {ID:6JQP:BSZO:ZMDW:XFCZ:GJXY:VIOR:EQHC:RQZU:MBWW:32AB:GFHO:KTCS Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-07-01 22:45:15.7448872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:45:16.218529 9692 cni.go:95] Creating CNI manager for ""
I0701 22:45:16.218529 9692 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:45:16.218529 9692 start_flags.go:310] config:
{Name:functional-20220701224009-7720 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220701224009-7720 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:45:16.222669 9692 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Fri 2022-07-01 22:40:50 UTC, end at Fri 2022-07-01 23:20:17 UTC. --
Jul 01 22:44:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:03.741488600Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jul 01 22:44:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:03.741671400Z" level=info msg="Daemon has completed initialization"
Jul 01 22:44:03 functional-20220701224009-7720 systemd[1]: Started Docker Application Container Engine.
Jul 01 22:44:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:03.803897400Z" level=info msg="API listen on [::]:2376"
Jul 01 22:44:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:03.808825000Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 22:44:04 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:04.280388700Z" level=error msg="Failed to compute size of container rootfs 3467e7aed14e1607dafd64e7ea6189a821830db6a20a1cf128ab240eae3f2357: mount does not exist"
Jul 01 22:44:04 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:04.620355900Z" level=error msg="ce2d85c55ebed44214821c96c3d94dbb84ed65a027f8a16b6b6c8a1623462497 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:44:04 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:04.672980800Z" level=error msg="47a0c58db5021df70518424d51028910967bbd3eba219c857c97bde4845d60ad cleanup: failed to delete container from containerd: no such container"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.372939200Z" level=info msg="ignoring event" container=2e947f1cc4ba6110151c0564fbfc83454dd4b855bf87394855f019680ed3f84c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.374409500Z" level=info msg="ignoring event" container=dde86bc1fe70046d347385823fae23c0974af69545d3f175c0dd6e7c75ec8ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.486939000Z" level=info msg="ignoring event" container=3dc39e1fe1a8349ee2a3419f8997ec1c90510ac8e6abf3f1d3383c629679c3f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.575068400Z" level=info msg="ignoring event" container=1039068ec20ef2d01c133cde842003f31206bced14a5c83a0b62ab24406550c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.575328100Z" level=info msg="ignoring event" container=39b1a8a5009fc51139f1c15a290e3cc9409061ba026aa905375ee4471a342faa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.579865700Z" level=info msg="ignoring event" container=771574448f5605482638001f7c1dd889cf7c6bb795fd395c71ffcbe6d66c878f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.580577000Z" level=info msg="ignoring event" container=2bb214ed5093fb9b5b4974fb131e4c82f9bd877ed1fb082ecba081f20fbd8748 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.673566500Z" level=info msg="ignoring event" container=489458f9725350a755b8ec24ca225420837629c1aa6642b7194cb672926b34e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:11 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:11.686368600Z" level=info msg="ignoring event" container=b4135d171acac2ec4534d59f4061a2a809aa8b2ac78f9e50dfb38299d8678124 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:12 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:12.388444000Z" level=error msg="4a6718bc02d8280987433054643e8856a77784074f592df6d608bae100341455 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:44:21 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:21.378348200Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=dd5a9235002aaa6ffaffe25e7666399e700ba6d7eec544e8bbf3f36b14347de6
Jul 01 22:44:21 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:21.483744600Z" level=info msg="ignoring event" container=dd5a9235002aaa6ffaffe25e7666399e700ba6d7eec544e8bbf3f36b14347de6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:44:35 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:44:35.277445500Z" level=info msg="ignoring event" container=5e7efe8439bfe92d7f5b184b4b753ba156924130d4aad67eab1806902f9dfd79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:47:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:47:03.107635700Z" level=info msg="ignoring event" container=be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:47:03 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:47:03.549963300Z" level=info msg="ignoring event" container=c6e738698d8fabbf13686cb3efda67e3fb62d02bd411b550b2c97b585f6a4cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:47:42 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:47:42.150509400Z" level=info msg="ignoring event" container=ace319288142f1c6e0ed4c78d1428011edf143308c2fdf3b3be81ef81b8e011e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:47:42 functional-20220701224009-7720 dockerd[8602]: time="2022-07-01T22:47:42.731754500Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
1f9592d09ff85 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 33 minutes ago Running echoserver 0 ab099e89dda25
487b88f7c80ad nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea 33 minutes ago Running myfrontend 0 53d6c1ffe6842
c24b01dec825d k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 33 minutes ago Running echoserver 0 16b425bb05a0e
e6374fbd62c83 nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e 34 minutes ago Running nginx 0 e0fefac1a012b
83c74bd835b60 mysql@sha256:8b4b41d530c40d77a3205c53f7ecf1026d735648d9a09777845f305953e5eff5 34 minutes ago Running mysql 0 4e7ca239b924d
7a386311c3481 6e38f40d628db 35 minutes ago Running storage-provisioner 4 f8e3d0b4cc74e
105aff0a6ab67 a4ca41631cc7a 35 minutes ago Running coredns 4 beec73634d9b4
8c86f310ce954 a634548d10b03 35 minutes ago Running kube-proxy 4 4faabfdd4dc9c
49c9989287e1d d3377ffb7177c 35 minutes ago Running kube-apiserver 0 308ff29363457
e8b92e279fffb 5d725196c1f47 35 minutes ago Running kube-scheduler 3 24153b3f252b1
431cc8a2eda74 34cdf99b1bb3b 35 minutes ago Running kube-controller-manager 4 ccb5899b98038
0623a1b1d2a08 aebe758cef4cd 35 minutes ago Running etcd 4 4d279df9a192d
dd5a9235002aa a4ca41631cc7a 36 minutes ago Exited coredns 3 2bb214ed5093f
b4135d171acac aebe758cef4cd 36 minutes ago Exited etcd 3 1039068ec20ef
2e947f1cc4ba6 5d725196c1f47 36 minutes ago Exited kube-scheduler 2 771574448f560
47a0c58db5021 34cdf99b1bb3b 36 minutes ago Created kube-controller-manager 3 8a5b0a9eaa858
ce2d85c55ebed a634548d10b03 36 minutes ago Created kube-proxy 3 a02c78f5ec718
ade67959e5c37 6e38f40d628db 37 minutes ago Exited storage-provisioner 3 6131cae13d8b0
*
* ==> coredns [105aff0a6ab6] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> coredns [dd5a9235002a] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/health: Going into lameduck mode for 5s
[ERROR] plugin/errors: 2 5977793392727603694.6165643118760102689. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
*
* ==> describe nodes <==
* Name: functional-20220701224009-7720
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220701224009-7720
kubernetes.io/os=linux
minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
minikube.k8s.io/name=functional-20220701224009-7720
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_01T22_41_27_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 01 Jul 2022 22:41:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220701224009-7720
AcquireTime: <unset>
RenewTime: Fri, 01 Jul 2022 23:20:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 01 Jul 2022 23:18:42 +0000 Fri, 01 Jul 2022 22:41:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 01 Jul 2022 23:18:42 +0000 Fri, 01 Jul 2022 22:41:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 01 Jul 2022 23:18:42 +0000 Fri, 01 Jul 2022 22:41:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 01 Jul 2022 23:18:42 +0000 Fri, 01 Jul 2022 22:41:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220701224009-7720
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: bbe1e1cef6e940328962dca52b3c5731
System UUID: bbe1e1cef6e940328962dca52b3c5731
Boot ID: 6959cb6a-0b88-42c6-8d42-129c0675f426
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54c4b5c49f-dz4cz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default hello-node-connect-578cdc45cb-ktvzx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default mysql-67f7d69d8b-l59pm 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 35m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
kube-system coredns-6d4b75cb6d-zlllt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220701224009-7720 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220701224009-7720 250m (1%) 0 (0%) 0 (0%) 0 (0%) 35m
kube-system kube-controller-manager-functional-20220701224009-7720 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-khf6l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220701224009-7720 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 35m kube-proxy
Normal Starting 37m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasSufficientMemory 39m (x6 over 39m) kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x6 over 39m) kubelet Node functional-20220701224009-7720 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39m (x6 over 39m) kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220701224009-7720 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220701224009-7720 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-20220701224009-7720 event: Registered Node functional-20220701224009-7720 in Controller
Normal RegisteredNode 37m node-controller Node functional-20220701224009-7720 event: Registered Node functional-20220701224009-7720 in Controller
Normal Starting 35m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 35m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 35m (x8 over 35m) kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 35m (x8 over 35m) kubelet Node functional-20220701224009-7720 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 35m (x7 over 35m) kubelet Node functional-20220701224009-7720 status is now: NodeHasSufficientPID
Normal RegisteredNode 35m node-controller Node functional-20220701224009-7720 event: Registered Node functional-20220701224009-7720 in Controller
*
* ==> dmesg <==
* [Jul 1 22:55] WSL2: Performing memory compaction.
[Jul 1 22:56] WSL2: Performing memory compaction.
[Jul 1 22:57] WSL2: Performing memory compaction.
[Jul 1 22:58] WSL2: Performing memory compaction.
[Jul 1 22:59] WSL2: Performing memory compaction.
[Jul 1 23:00] WSL2: Performing memory compaction.
[Jul 1 23:01] WSL2: Performing memory compaction.
[Jul 1 23:02] WSL2: Performing memory compaction.
[Jul 1 23:03] WSL2: Performing memory compaction.
[Jul 1 23:04] WSL2: Performing memory compaction.
[Jul 1 23:05] WSL2: Performing memory compaction.
[Jul 1 23:06] WSL2: Performing memory compaction.
[Jul 1 23:07] WSL2: Performing memory compaction.
[Jul 1 23:08] WSL2: Performing memory compaction.
[Jul 1 23:09] WSL2: Performing memory compaction.
[Jul 1 23:10] WSL2: Performing memory compaction.
[Jul 1 23:11] WSL2: Performing memory compaction.
[Jul 1 23:12] WSL2: Performing memory compaction.
[Jul 1 23:13] WSL2: Performing memory compaction.
[Jul 1 23:14] WSL2: Performing memory compaction.
[Jul 1 23:15] WSL2: Performing memory compaction.
[Jul 1 23:16] WSL2: Performing memory compaction.
[Jul 1 23:17] WSL2: Performing memory compaction.
[Jul 1 23:18] WSL2: Performing memory compaction.
[Jul 1 23:19] WSL2: Performing memory compaction.
*
* ==> etcd [0623a1b1d2a0] <==
* {"level":"info","ts":"2022-07-01T22:47:04.884Z","caller":"traceutil/trace.go:171","msg":"trace[965259605] range","detail":"{range_begin:/registry/pods/default/sp-pod; range_end:; response_count:0; response_revision:821; }","duration":"187.7969ms","start":"2022-07-01T22:47:04.696Z","end":"2022-07-01T22:47:04.884Z","steps":["trace[965259605] 'range keys from in-memory index tree' (duration: 179.2311ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T22:47:07.983Z","caller":"traceutil/trace.go:171","msg":"trace[1021194534] linearizableReadLoop","detail":"{readStateIndex:923; appliedIndex:921; }","duration":"104.1534ms","start":"2022-07-01T22:47:07.879Z","end":"2022-07-01T22:47:07.983Z","steps":["trace[1021194534] 'read index received' (duration: 40.3035ms)","trace[1021194534] 'applied index is now lower than readState.Index' (duration: 63.8462ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:47:07.983Z","caller":"traceutil/trace.go:171","msg":"trace[965920655] transaction","detail":"{read_only:false; response_revision:836; number_of_response:1; }","duration":"108.1209ms","start":"2022-07-01T22:47:07.875Z","end":"2022-07-01T22:47:07.983Z","steps":["trace[965920655] 'process raft request' (duration: 44.1405ms)","trace[965920655] 'compare' (duration: 63.3766ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:47:07.983Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.6911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/hello-node-54c4b5c49f-dz4cz\" ","response":"range_response_count:1 size:1584"}
{"level":"info","ts":"2022-07-01T22:47:07.983Z","caller":"traceutil/trace.go:171","msg":"trace[2038927943] range","detail":"{range_begin:/registry/pods/default/hello-node-54c4b5c49f-dz4cz; range_end:; response_count:1; response_revision:837; }","duration":"104.7793ms","start":"2022-07-01T22:47:07.878Z","end":"2022-07-01T22:47:07.983Z","steps":["trace[2038927943] 'agreement among raft nodes before linearized reading' (duration: 104.6095ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T22:47:07.984Z","caller":"traceutil/trace.go:171","msg":"trace[555112047] transaction","detail":"{read_only:false; response_revision:837; number_of_response:1; }","duration":"108.7078ms","start":"2022-07-01T22:47:07.875Z","end":"2022-07-01T22:47:07.984Z","steps":["trace[555112047] 'process raft request' (duration: 107.7002ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-01T22:47:08.182Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.4299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:372"}
{"level":"info","ts":"2022-07-01T22:47:08.182Z","caller":"traceutil/trace.go:171","msg":"trace[1567013303] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:838; }","duration":"106.6542ms","start":"2022-07-01T22:47:08.075Z","end":"2022-07-01T22:47:08.182Z","steps":["trace[1567013303] 'agreement among raft nodes before linearized reading' (duration: 20.7135ms)","trace[1567013303] 'range keys from in-memory index tree' (duration: 85.6662ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:47:08.183Z","caller":"traceutil/trace.go:171","msg":"trace[1312470546] transaction","detail":"{read_only:false; response_revision:839; number_of_response:1; }","duration":"104.1199ms","start":"2022-07-01T22:47:08.078Z","end":"2022-07-01T22:47:08.183Z","steps":["trace[1312470546] 'process raft request' (duration: 17.9785ms)","trace[1312470546] 'compare' (duration: 85.1493ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:47:08.196Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.9259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-01T22:47:08.196Z","caller":"traceutil/trace.go:171","msg":"trace[1575438989] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:840; }","duration":"114.098ms","start":"2022-07-01T22:47:08.082Z","end":"2022-07-01T22:47:08.196Z","steps":["trace[1575438989] 'agreement among raft nodes before linearized reading' (duration: 113.8936ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T22:54:30.298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
{"level":"info","ts":"2022-07-01T22:54:30.300Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":969,"took":"1.4317ms"}
{"level":"info","ts":"2022-07-01T22:59:30.315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1179}
{"level":"info","ts":"2022-07-01T22:59:30.316Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1179,"took":"518.2µs"}
{"level":"info","ts":"2022-07-01T23:04:30.354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1389}
{"level":"info","ts":"2022-07-01T23:04:30.356Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1389,"took":"549.1µs"}
{"level":"info","ts":"2022-07-01T23:09:30.368Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1598}
{"level":"info","ts":"2022-07-01T23:09:30.369Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1598,"took":"507µs"}
{"level":"warn","ts":"2022-07-01T23:11:48.692Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.9382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2022-07-01T23:11:48.692Z","caller":"traceutil/trace.go:171","msg":"trace[926837432] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1903; }","duration":"107.268ms","start":"2022-07-01T23:11:48.585Z","end":"2022-07-01T23:11:48.692Z","steps":["trace[926837432] 'count revisions from in-memory index tree' (duration: 106.5641ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T23:14:30.384Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1808}
{"level":"info","ts":"2022-07-01T23:14:30.385Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1808,"took":"664.9µs"}
{"level":"info","ts":"2022-07-01T23:19:30.400Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2017}
{"level":"info","ts":"2022-07-01T23:19:30.401Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2017,"took":"594.6µs"}
*
* ==> etcd [b4135d171aca] <==
* {"level":"info","ts":"2022-07-01T22:44:08.387Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-01T22:44:08.388Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-01T22:44:08.388Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2022-07-01T22:44:09.387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-07-01T22:44:09.391Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220701224009-7720 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-01T22:44:09.391Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-01T22:44:09.392Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-01T22:44:09.393Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-01T22:44:09.393Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-01T22:44:09.394Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-01T22:44:09.394Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-07-01T22:44:11.374Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-07-01T22:44:11.374Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-20220701224009-7720","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/07/01 22:44:11 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/07/01 22:44:11 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-07-01T22:44:11.377Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-07-01T22:44:11.472Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-07-01T22:44:11.474Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-07-01T22:44:11.474Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-20220701224009-7720","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 23:20:19 up 57 min, 0 users, load average: 0.71, 0.46, 0.61
Linux functional-20220701224009-7720 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [49c9989287e1] <==
* Trace[519856097]: [870.9723ms] [870.9723ms] END
I0701 22:45:39.469018 1 trace.go:205] Trace[267738815]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jul-2022 22:45:38.708) (total time: 760ms):
Trace[267738815]: [760.684ms] [760.684ms] END
I0701 22:45:39.470675 1 trace.go:205] Trace[1514580622]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:0ca92464-b17c-420a-aa42-690f3973d705,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jul-2022 22:45:38.708) (total time: 762ms):
Trace[1514580622]: ---"Listing from storage done" 761ms (22:45:39.469)
Trace[1514580622]: [762.435ms] [762.435ms] END
I0701 22:45:44.079730 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.109.192.42]
I0701 22:45:44.080047 1 trace.go:205] Trace[1379896789]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:63449b53-9b89-44a4-b319-ff5cfd704ad8,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (01-Jul-2022 22:45:43.495) (total time: 584ms):
Trace[1379896789]: ---"Object stored in database" 583ms (22:45:44.079)
Trace[1379896789]: [584.8613ms] [584.8613ms] END
I0701 22:46:55.651508 1 trace.go:205] Trace[1022645226]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:17051b2b-fcb5-44ab-80d7-1252b99a350a,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jul-2022 22:46:53.806) (total time: 1844ms):
Trace[1022645226]: ---"Object stored in database" 1844ms (22:46:55.651)
Trace[1022645226]: [1.8446267s] [1.8446267s] END
I0701 22:46:55.695236 1 trace.go:205] Trace[772909796]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:9b5bd04c-8a91-43c7-b885-4e4eeb731ba4,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Jul-2022 22:46:54.186) (total time: 1508ms):
Trace[772909796]: ---"About to write a response" 1508ms (22:46:55.695)
Trace[772909796]: [1.5084733s] [1.5084733s] END
I0701 22:46:55.695881 1 trace.go:205] Trace[1364575187]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jul-2022 22:46:54.092) (total time: 1602ms):
Trace[1364575187]: [1.6028209s] [1.6028209s] END
I0701 22:46:55.696648 1 trace.go:205] Trace[24769166]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:35aa9183-6f40-4d2f-a928-620533e41821,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jul-2022 22:46:54.092) (total time: 1603ms):
Trace[24769166]: ---"Listing from storage done" 1603ms (22:46:55.695)
Trace[24769166]: [1.6037146s] [1.6037146s] END
I0701 22:46:59.280136 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.101.95.253]
I0701 22:47:08.277169 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.105.143.60]
W0701 22:58:01.464042 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0701 23:10:04.471198 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [431cc8a2eda7] <==
* I0701 22:44:50.474013 1 range_allocator.go:173] Starting range CIDR allocator
I0701 22:44:50.474040 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0701 22:44:50.474061 1 shared_informer.go:262] Caches are synced for cidrallocator
I0701 22:44:50.475267 1 shared_informer.go:262] Caches are synced for daemon sets
I0701 22:44:50.475839 1 shared_informer.go:262] Caches are synced for TTL
I0701 22:44:50.482491 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0701 22:44:50.494052 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0701 22:44:50.572428 1 shared_informer.go:262] Caches are synced for deployment
I0701 22:44:50.572454 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:44:50.594422 1 shared_informer.go:262] Caches are synced for disruption
I0701 22:44:50.594525 1 disruption.go:371] Sending events to api server.
I0701 22:44:50.596874 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:44:50.619855 1 shared_informer.go:262] Caches are synced for attach detach
I0701 22:44:50.639350 1 shared_informer.go:262] Caches are synced for persistent volume
I0701 22:44:51.084034 1 shared_informer.go:262] Caches are synced for garbage collector
I0701 22:44:51.110706 1 shared_informer.go:262] Caches are synced for garbage collector
I0701 22:44:51.111005 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0701 22:45:16.576529 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
I0701 22:45:16.704364 1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-l59pm"
I0701 22:46:26.791745 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0701 22:46:26.791920 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0701 22:46:58.791842 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
I0701 22:46:58.802100 1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-ktvzx"
I0701 22:47:07.588535 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
I0701 22:47:07.681699 1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-dz4cz"
*
* ==> kube-controller-manager [47a0c58db502] <==
*
*
* ==> kube-proxy [8c86f310ce95] <==
* I0701 22:44:37.172890 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0701 22:44:37.176076 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0701 22:44:37.178926 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0701 22:44:37.181589 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0701 22:44:37.185248 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0701 22:44:37.203169 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0701 22:44:37.203295 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0701 22:44:37.203347 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0701 22:44:37.392506 1 server_others.go:206] "Using iptables Proxier"
I0701 22:44:37.392643 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0701 22:44:37.392659 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0701 22:44:37.392674 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0701 22:44:37.392703 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:44:37.393190 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:44:37.393492 1 server.go:661] "Version info" version="v1.24.2"
I0701 22:44:37.393504 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:44:37.394103 1 config.go:317] "Starting service config controller"
I0701 22:44:37.394224 1 shared_informer.go:255] Waiting for caches to sync for service config
I0701 22:44:37.394380 1 config.go:226] "Starting endpoint slice config controller"
I0701 22:44:37.394390 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0701 22:44:37.394406 1 config.go:444] "Starting node config controller"
I0701 22:44:37.394436 1 shared_informer.go:255] Waiting for caches to sync for node config
I0701 22:44:37.496718 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0701 22:44:37.496749 1 shared_informer.go:262] Caches are synced for service config
I0701 22:44:37.496807 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [ce2d85c55ebe] <==
*
*
* ==> kube-scheduler [2e947f1cc4ba] <==
* E0701 22:44:10.428341 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.425782 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.428436 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.427643 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.429654 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.429798 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.429985 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.427632 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430419 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.426531 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.429693 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.430539 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.430561 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430688 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430705 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430710 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430586 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
W0701 22:44:10.430031 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.430254 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
E0701 22:44:10.431330 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
I0701 22:44:11.206344 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0701 22:44:11.206904 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 22:44:11.206935 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0701 22:44:11.207144 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0701 22:44:11.207210 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [e8b92e279fff] <==
* I0701 22:44:29.015047 1 serving.go:348] Generated self-signed cert in-memory
W0701 22:44:34.379499 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0701 22:44:34.379703 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0701 22:44:34.379728 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0701 22:44:34.379744 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0701 22:44:34.672943 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
I0701 22:44:34.673073 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:44:34.675938 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0701 22:44:34.676089 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0701 22:44:34.676220 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 22:44:34.679387 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0701 22:44:34.777598 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Fri 2022-07-01 22:40:50 UTC, end at Fri 2022-07-01 23:20:19 UTC. --
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.300449 10748 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96deeedf-1357-4d19-a789-b0678e2f29ef-pvc-585a91f1-47cc-4cac-b76c-975309462ec3" (OuterVolumeSpecName: "mypd") pod "96deeedf-1357-4d19-a789-b0678e2f29ef" (UID: "96deeedf-1357-4d19-a789-b0678e2f29ef"). InnerVolumeSpecName "pvc-585a91f1-47cc-4cac-b76c-975309462ec3". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.312515 10748 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96deeedf-1357-4d19-a789-b0678e2f29ef-kube-api-access-7fpkj" (OuterVolumeSpecName: "kube-api-access-7fpkj") pod "96deeedf-1357-4d19-a789-b0678e2f29ef" (UID: "96deeedf-1357-4d19-a789-b0678e2f29ef"). InnerVolumeSpecName "kube-api-access-7fpkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.387907 10748 scope.go:110] "RemoveContainer" containerID="be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95"
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.403076 10748 reconciler.go:312] "Volume detached for volume \"pvc-585a91f1-47cc-4cac-b76c-975309462ec3\" (UniqueName: \"kubernetes.io/host-path/96deeedf-1357-4d19-a789-b0678e2f29ef-pvc-585a91f1-47cc-4cac-b76c-975309462ec3\") on node \"functional-20220701224009-7720\" DevicePath \"\""
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.403256 10748 reconciler.go:312] "Volume detached for volume \"kube-api-access-7fpkj\" (UniqueName: \"kubernetes.io/projected/96deeedf-1357-4d19-a789-b0678e2f29ef-kube-api-access-7fpkj\") on node \"functional-20220701224009-7720\" DevicePath \"\""
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.693142 10748 scope.go:110] "RemoveContainer" containerID="be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95"
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: E0701 22:47:04.776340 10748 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95" containerID="be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95"
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.776527 10748 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95} err="failed to get container status \"be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95\": rpc error: code = Unknown desc = Error: No such container: be3b3a8588ba4f462a7b4947be08a0fd2cdf94dc9dd6f2fa2603ebde34f78a95"
Jul 01 22:47:04 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:04.989520 10748 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=96deeedf-1357-4d19-a789-b0678e2f29ef path="/var/lib/kubelet/pods/96deeedf-1357-4d19-a789-b0678e2f29ef/volumes"
Jul 01 22:47:05 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:05.582643 10748 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:47:05 functional-20220701224009-7720 kubelet[10748]: E0701 22:47:05.582866 10748 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="96deeedf-1357-4d19-a789-b0678e2f29ef" containerName="myfrontend"
Jul 01 22:47:05 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:05.582972 10748 memory_manager.go:345] "RemoveStaleState removing state" podUID="96deeedf-1357-4d19-a789-b0678e2f29ef" containerName="myfrontend"
Jul 01 22:47:05 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:05.692612 10748 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt82\" (UniqueName: \"kubernetes.io/projected/2e18d7d5-5714-48c6-a262-30d24b99b8a5-kube-api-access-sdt82\") pod \"sp-pod\" (UID: \"2e18d7d5-5714-48c6-a262-30d24b99b8a5\") " pod="default/sp-pod"
Jul 01 22:47:05 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:05.692857 10748 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-585a91f1-47cc-4cac-b76c-975309462ec3\" (UniqueName: \"kubernetes.io/host-path/2e18d7d5-5714-48c6-a262-30d24b99b8a5-pvc-585a91f1-47cc-4cac-b76c-975309462ec3\") pod \"sp-pod\" (UID: \"2e18d7d5-5714-48c6-a262-30d24b99b8a5\") " pod="default/sp-pod"
Jul 01 22:47:07 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:07.789804 10748 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:47:08 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:08.075588 10748 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6mq\" (UniqueName: \"kubernetes.io/projected/cf20cd84-bf07-45ad-85e9-bcc446ae417c-kube-api-access-bk6mq\") pod \"hello-node-54c4b5c49f-dz4cz\" (UID: \"cf20cd84-bf07-45ad-85e9-bcc446ae417c\") " pod="default/hello-node-54c4b5c49f-dz4cz"
Jul 01 22:47:09 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:09.678278 10748 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="53d6c1ffe6842592c1014bf3f3defbb4ba1f29118b864a909537fb63a452f3c6"
Jul 01 22:47:11 functional-20220701224009-7720 kubelet[10748]: I0701 22:47:11.675611 10748 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ab099e89dda25307bf11b968037a629138ee51e15a0257165bd3696158fb089c"
Jul 01 22:49:25 functional-20220701224009-7720 kubelet[10748]: W0701 22:49:25.109435 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 22:54:25 functional-20220701224009-7720 kubelet[10748]: W0701 22:54:25.108484 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 22:59:25 functional-20220701224009-7720 kubelet[10748]: W0701 22:59:25.113293 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 23:04:25 functional-20220701224009-7720 kubelet[10748]: W0701 23:04:25.115120 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 23:09:25 functional-20220701224009-7720 kubelet[10748]: W0701 23:09:25.117712 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 23:14:25 functional-20220701224009-7720 kubelet[10748]: W0701 23:14:25.117247 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jul 01 23:19:25 functional-20220701224009-7720 kubelet[10748]: W0701 23:19:25.120779 10748 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [7a386311c348] <==
* I0701 22:44:36.775743 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0701 22:44:36.976358 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0701 22:44:36.976455 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0701 22:44:54.635086 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0701 22:44:54.635438 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220701224009-7720_e38636d0-43f1-4665-8e6c-f0b0dd2509e6!
I0701 22:44:54.635438 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5b6350e-636c-4622-b436-7da8b155cc46", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220701224009-7720_e38636d0-43f1-4665-8e6c-f0b0dd2509e6 became leader
I0701 22:44:54.736060 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220701224009-7720_e38636d0-43f1-4665-8e6c-f0b0dd2509e6!
I0701 22:46:26.792191 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0701 22:46:26.792529 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 3b41cf1f-fe95-401f-8c72-d549d3f9d762 383 0 2022-07-01 22:41:46 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-07-01 22:41:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-585a91f1-47cc-4cac-b76c-975309462ec3 &PersistentVolumeClaim{ObjectMeta:{myclaim default 585a91f1-47cc-4cac-b76c-975309462ec3 751 0 2022-07-01 22:46:26 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-07-01 22:46:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-07-01 22:46:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0701 22:46:26.793032 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"585a91f1-47cc-4cac-b76c-975309462ec3", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0701 22:46:26.793728 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-585a91f1-47cc-4cac-b76c-975309462ec3" provisioned
I0701 22:46:26.793861 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0701 22:46:26.793870 1 volume_store.go:212] Trying to save persistentvolume "pvc-585a91f1-47cc-4cac-b76c-975309462ec3"
I0701 22:46:26.880645 1 volume_store.go:219] persistentvolume "pvc-585a91f1-47cc-4cac-b76c-975309462ec3" saved
I0701 22:46:26.881330 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"585a91f1-47cc-4cac-b76c-975309462ec3", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-585a91f1-47cc-4cac-b76c-975309462ec3
*
* ==> storage-provisioner [ade67959e5c3] <==
* I0701 22:42:58.239065 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0701 22:42:58.294453 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0701 22:42:58.294609 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0701 22:43:15.849759 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0701 22:43:15.850017 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5b6350e-636c-4622-b436-7da8b155cc46", APIVersion:"v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220701224009-7720_8bce126c-88f2-42b5-b43c-b51b7c7497e9 became leader
I0701 22:43:15.850095 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220701224009-7720_8bce126c-88f2-42b5-b43c-b51b7c7497e9!
I0701 22:43:15.950509 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220701224009-7720_8bce126c-88f2-42b5-b43c-b51b7c7497e9!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220701224009-7720 -n functional-20220701224009-7720
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220701224009-7720 -n functional-20220701224009-7720: (3.1901256s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220701224009-7720 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220701224009-7720 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220701224009-7720 describe pod : exit status 1 (186.9898ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220701224009-7720 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1996.66s)
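
For readability, the claim recorded in the `kubectl.kubernetes.io/last-applied-configuration` annotation of the PVC dump above corresponds to roughly this manifest (a reconstruction from the log, not part of the original output):

```yaml
# Reconstructed from the last-applied-configuration annotation in the
# PVC object dump above; the storage-provisioner binds it to the default
# "standard" StorageClass (k8s.io/minikube-hostpath).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
```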