=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run: kubectl --context functional-20220629181245-2408 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run: kubectl --context functional-20220629181245-2408 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-7pm4f" [5a35bec7-0a31-421a-98b3-2ac8fb3946dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-7pm4f" [5a35bec7-0a31-421a-98b3-2ac8fb3946dc] Running
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.1030223s
functional_test.go:1448: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service list
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service list: (7.2822054s)
functional_test.go:1462: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to send interrupt to proc: not supported by windows
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node: exit status 1 (32m26.5469225s)
-- stdout --
https://127.0.0.1:53406
-- /stdout --
** stderr **
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run: kubectl --context functional-20220629181245-2408 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name: hello-node-54c4b5c49f-7pm4f
Namespace: default
Priority: 0
Node: functional-20220629181245-2408/192.168.49.2
Start Time: Wed, 29 Jun 2022 18:20:02 +0000
Labels: app=hello-node
pod-template-hash=54c4b5c49f
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/hello-node-54c4b5c49f
Containers:
echoserver:
Container ID: docker://a38ab82ec10c4e95af4b7651e032bffd9a4dadc58e74e0c5200616d6476c99af
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 29 Jun 2022 18:20:04 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ncwx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-4ncwx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-54c4b5c49f-7pm4f to functional-20220629181245-2408
Normal Pulled 32m kubelet, functional-20220629181245-2408 Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal Created 32m kubelet, functional-20220629181245-2408 Created container echoserver
Normal Started 32m kubelet, functional-20220629181245-2408 Started container echoserver
Name: hello-node-connect-578cdc45cb-m2pgx
Namespace: default
Priority: 0
Node: functional-20220629181245-2408/192.168.49.2
Start Time: Wed, 29 Jun 2022 18:19:27 +0000
Labels: app=hello-node-connect
pod-template-hash=578cdc45cb
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-node-connect-578cdc45cb
Containers:
echoserver:
Container ID: docker://9c2d58554a4c20ee271f9f165e72a7614362ca65826ee89ce8333a2d8423e82b
Image: k8s.gcr.io/echoserver:1.8
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 29 Jun 2022 18:19:54 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w544q (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-w544q:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/hello-node-connect-578cdc45cb-m2pgx to functional-20220629181245-2408
Normal Pulling 33m kubelet, functional-20220629181245-2408 Pulling image "k8s.gcr.io/echoserver:1.8"
Normal Pulled 32m kubelet, functional-20220629181245-2408 Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 20.9877549s
Normal Created 32m kubelet, functional-20220629181245-2408 Created container echoserver
Normal Started 32m kubelet, functional-20220629181245-2408 Started container echoserver
functional_test.go:1411: (dbg) Run: kubectl --context functional-20220629181245-2408 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run: kubectl --context functional-20220629181245-2408 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name: hello-node
Namespace: default
Labels: app=hello-node
Annotations: <none>
Selector: app=hello-node
Type: NodePort
IP: 10.107.130.255
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30735/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-20220629181245-2408
helpers_test.go:231: (dbg) Done: docker inspect functional-20220629181245-2408: (1.1016731s)
helpers_test.go:235: (dbg) docker inspect functional-20220629181245-2408:
-- stdout --
[
{
"Id": "212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa",
"Created": "2022-06-29T18:13:37.710266Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 26189,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-29T18:13:38.7099878Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
"ResolvConfPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/hostname",
"HostsPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/hosts",
"LogPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa-json.log",
"Name": "/functional-20220629181245-2408",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-20220629181245-2408:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-20220629181245-2408",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4194304000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6
c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/d
ocker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63
962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
"MergedDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/merged",
"UpperDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/diff",
"WorkDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-20220629181245-2408",
"Source": "/var/lib/docker/volumes/functional-20220629181245-2408/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-20220629181245-2408",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-20220629181245-2408",
"name.minikube.sigs.k8s.io": "functional-20220629181245-2408",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a270067982d790ed610e5bd163d8862e3483ce4b2983a8f340ab49ff87ede8b7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "53084"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "53085"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "53086"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "53087"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "53088"
}
]
},
"SandboxKey": "/var/run/docker/netns/a270067982d7",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-20220629181245-2408": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"212bfdec5401",
"functional-20220629181245-2408"
],
"NetworkID": "da3406170d3a0abd5dd8a6d823daa27e6ae73eddab3c356fd71e6fdc35be0102",
"EndpointID": "556c4ed84bdd5c0f8652aefd3ff33363fd7c859c70f4777622e9e4d70d8d1bd9",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220629181245-2408 -n functional-20220629181245-2408
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220629181245-2408 -n functional-20220629181245-2408: (6.7446379s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs -n 25: (8.0991036s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs:
-- stdout --
*
* ==> Audit <==
* |----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
| start | -p | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT | |
| | functional-20220629181245-2408 | | | | | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| dashboard | --url --port 36195 -p | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT | |
| | functional-20220629181245-2408 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| start | -p | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT | |
| | functional-20220629181245-2408 | | | | | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /etc/test/nested/copy/2408/hosts | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /etc/ssl/certs/2408.pem | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /usr/share/ca-certificates/2408.pem | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /etc/ssl/certs/51391683.0 | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /etc/ssl/certs/24082.pem | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /usr/share/ca-certificates/24082.pem | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
| | ssh sudo cat | | | | | |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | |
| | ssh sudo systemctl is-active | | | | | |
| | crio | | | | | |
| cp | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | cp testdata\cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | ssh -n | | | | | |
| | functional-20220629181245-2408 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-20220629181245-2408 cp functional-20220629181245-2408:/home/docker/cp-test.txt | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd3231594891\001\cp-test.txt | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | ssh -n | | | | | |
| | functional-20220629181245-2408 | | | | | |
| | sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| image | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | image ls --format short | | | | | |
| image | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
| | image ls --format yaml | | | | | |
| ssh | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | |
| | ssh pgrep buildkitd | | | | | |
| image | functional-20220629181245-2408 image build -t | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:23 GMT |
| | localhost/my-image:functional-20220629181245-2408 | | | | | |
| | testdata\build | | | | | |
| image | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | image ls | | | | | |
| image | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | image ls --format json | | | | | |
| image | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | image ls --format table | | | | | |
| update-context | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-20220629181245-2408 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
|----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/29 18:20:56
Running on machine: minikube8
Binary: Built with gc go1.18.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0629 18:20:56.631224 2708 out.go:296] Setting OutFile to fd 756 ...
I0629 18:20:56.699895 2708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0629 18:20:56.699895 2708 out.go:309] Setting ErrFile to fd 684...
I0629 18:20:56.699895 2708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0629 18:20:56.723887 2708 out.go:303] Setting JSON to false
I0629 18:20:56.726519 2708 start.go:115] hostinfo: {"hostname":"minikube8","uptime":19419,"bootTime":1656507437,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
W0629 18:20:56.726519 2708 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0629 18:20:56.731366 2708 out.go:177] * [functional-20220629181245-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0629 18:20:56.740898 2708 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
I0629 18:20:56.743506 2708 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
I0629 18:20:56.747540 2708 out.go:177] - MINIKUBE_LOCATION=14420
I0629 18:20:56.749869 2708 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0629 18:20:56.753197 2708 config.go:178] Loaded profile config "functional-20220629181245-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0629 18:20:56.755522 2708 driver.go:360] Setting default libvirt URI to qemu:///system
I0629 18:20:59.491006 2708 docker.go:137] docker version: linux-20.10.16
I0629 18:20:59.499120 2708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0629 18:21:01.593392 2708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0941095s)
I0629 18:21:01.594134 2708 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:21:00.5383184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0629 18:21:01.602783 2708 out.go:177] * Using the docker driver based on existing profile
I0629 18:21:01.604768 2708 start.go:284] selected driver: docker
I0629 18:21:01.604972 2708 start.go:808] validating driver "docker" against &{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0629 18:21:01.604972 2708 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0629 18:21:01.619960 2708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0629 18:21:03.730073 2708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1101s)
I0629 18:21:03.730073 2708 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:21:02.6700909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0629 18:21:03.782833 2708 cni.go:95] Creating CNI manager for ""
I0629 18:21:03.782833 2708 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0629 18:21:03.782833 2708 start_flags.go:310] config:
{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0629 18:21:03.793917 2708 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Logs begin at Wed 2022-06-29 18:13:39 UTC, end at Wed 2022-06-29 18:53:01 UTC. --
Jun 29 18:17:48 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:48.945766100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 1a68ab8e986055d9ab693ac55d5c7207f0b00da3209804278e2b708257684c72 01c1830a95dbe3e2a78e968510049b9989b2ace82a6ced9151741389f604930f], retrying...."
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.034518600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.152076800Z" level=info msg="Loading containers: done."
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.219808200Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.219994600Z" level=info msg="Daemon has completed initialization"
Jun 29 18:17:49 functional-20220629181245-2408 systemd[1]: Started Docker Application Container Engine.
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.279743100Z" level=info msg="API listen on [::]:2376"
Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.288555900Z" level=info msg="API listen on /var/run/docker.sock"
Jun 29 18:17:53 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:53.971619800Z" level=info msg="ignoring event" container=8f862c41e32a1e21645ee4c2db1d64a064d9a03762b23c242f679af9e93b40cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.574576300Z" level=info msg="ignoring event" container=5b9368e46d90e8fb14de6b75aedb4d217ad8cf517fa4b5ef651d6f31a0e8e2d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.773474800Z" level=info msg="ignoring event" container=6c9072f796a5a708f84743a6314b3fcb40d0afa747f5e5d6fbb3b9a78d3f79bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774108100Z" level=info msg="ignoring event" container=289f59e187075b0c810ef72d68ee48eb24e87feda6a4a6edb13d3d16af6649c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774632200Z" level=info msg="ignoring event" container=3b70cc5d916be588602612cdff754fc3e4042e3c5bc3968261ac6dde3c5b7b0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774944900Z" level=info msg="ignoring event" container=b08c43c862782125e7d28ebf26c863e3abb878fd29832eef710d0b771b93f1bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.777080200Z" level=info msg="ignoring event" container=a28b12885d0afb1e744df3ece228ce9b49a309a0b8db289f0061d443806d54e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.872019200Z" level=info msg="ignoring event" container=ad13f7d80fedf872cf420e2d6a618e3a29b7e66611eafb84a9c32bfd9bab5acd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.872120900Z" level=info msg="ignoring event" container=6091ef84ba649b00e6db52f5d1b8952b75a3c0b3992bf8adb80895973cff8cac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.978529700Z" level=info msg="ignoring event" container=2a1307e43ecd4e53245717b4da9456c40b8aedca6c604a9a0e20cb5c45847262 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:17:59 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.984580300Z" level=info msg="ignoring event" container=8a221bc76ab3fce407a351706ce1ac0b37f825a9888f573170ccd58e112a1464 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:18:03 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:18:03.598832300Z" level=info msg="ignoring event" container=fa1705cd6426848b08c61ffb695cc71b61cabb2c4713f1a00a3c43c110412250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:18:16 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:18:16.803467300Z" level=info msg="ignoring event" container=aa917099ec7c6a97dd3e3e8aacfdba3ef4364a3956197909fdf96d9fc653533b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:19:47 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:19:47.493092900Z" level=info msg="ignoring event" container=8eb2d8391ccfaea6ca7c6f63d9fec05f02003fa857df481a8c4f055337d64693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:19:48 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:19:48.371800000Z" level=info msg="ignoring event" container=49c0a39c294942be8e71067ced35dd5b512aba2a554db28270ed23817fd73bb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:23:02 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:23:02.872319200Z" level=info msg="ignoring event" container=ff29731e5d65b82348c0b0d5cb902a5d72cf8c69b8e51a1ad887c2d9c0cb1202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:23:03 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:23:03.560444400Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
598bfc8199ffb mysql@sha256:8b4b41d530c40d77a3205c53f7ecf1026d735648d9a09777845f305953e5eff5 30 minutes ago Running mysql 0 9fbc6aee81bd1
a38ab82ec10c4 82e4c8a736a4f 32 minutes ago Running echoserver 0 af2618c2f2906
9b0480c6c94e7 nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea 33 minutes ago Running myfrontend 0 19704fb3610eb
9c2d58554a4c2 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 33 minutes ago Running echoserver 0 f3790d8f70348
254d7e77ab7b5 nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e 33 minutes ago Running nginx 0 7b769a4d97e84
22c1c4f4f76fa 6e38f40d628db 34 minutes ago Running storage-provisioner 4 635aa34b11d4a
249f22d600739 a634548d10b03 34 minutes ago Running kube-proxy 4 a9f396c4a971d
d562c25fbc4e8 a4ca41631cc7a 34 minutes ago Running coredns 3 6b2242aa8182f
3df956d56f8ad d3377ffb7177c 34 minutes ago Running kube-apiserver 0 bf61a429495f6
5b56e5f429943 aebe758cef4cd 34 minutes ago Running etcd 4 e8beefefd6c28
b45d5ee5a7f3e 34cdf99b1bb3b 34 minutes ago Running kube-controller-manager 3 b5d261e3b379d
d24ee2d8aa4bc 5d725196c1f47 34 minutes ago Running kube-scheduler 4 3a8892b491e09
8a221bc76ab3f aebe758cef4cd 35 minutes ago Exited etcd 3 5b9368e46d90e
8f862c41e32a1 6e38f40d628db 35 minutes ago Exited storage-provisioner 3 b08c43c862782
ad13f7d80fedf a634548d10b03 35 minutes ago Exited kube-proxy 3 6c9072f796a5a
2a1307e43ecd4 34cdf99b1bb3b 35 minutes ago Exited kube-controller-manager 2 a28b12885d0af
fa1705cd64268 a4ca41631cc7a 35 minutes ago Exited coredns 2 6091ef84ba649
ede947b24c2c7 5d725196c1f47 35 minutes ago Exited kube-scheduler 3 7d40673c18eb7
*
* ==> coredns [d562c25fbc4e] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> coredns [fa1705cd6426] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: functional-20220629181245-2408
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220629181245-2408
kubernetes.io/os=linux
minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
minikube.k8s.io/name=functional-20220629181245-2408
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_06_29T18_14_33_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 29 Jun 2022 18:14:29 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220629181245-2408
AcquireTime: <unset>
RenewTime: Wed, 29 Jun 2022 18:52:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 29 Jun 2022 18:48:49 +0000 Wed, 29 Jun 2022 18:14:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 29 Jun 2022 18:48:49 +0000 Wed, 29 Jun 2022 18:14:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 29 Jun 2022 18:48:49 +0000 Wed, 29 Jun 2022 18:14:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 29 Jun 2022 18:48:49 +0000 Wed, 29 Jun 2022 18:14:44 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-20220629181245-2408
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: bbe1e1cef6e940328962dca52b3c5731
System UUID: bbe1e1cef6e940328962dca52b3c5731
Boot ID: 3343ff08-5090-4fcc-990d-809e76a24666
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54c4b5c49f-7pm4f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m
default hello-node-connect-578cdc45cb-m2pgx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
default mysql-67f7d69d8b-b2279 600m (3%) 700m (4%) 512Mi (0%) 700Mi (1%) 31m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33m
kube-system coredns-6d4b75cb6d-8wtrf 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 38m
kube-system etcd-functional-20220629181245-2408 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-functional-20220629181245-2408 250m (1%) 0 (0%) 0 (0%) 0 (0%) 34m
kube-system kube-controller-manager-functional-20220629181245-2408 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-proxy-xnr8l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system kube-scheduler-functional-20220629181245-2408 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (8%) 700m (4%)
memory 682Mi (1%) 870Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 34m kube-proxy
Normal Starting 37m kube-proxy
Normal Starting 38m kube-proxy
Normal NodeHasSufficientMemory 38m (x6 over 38m) kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m (x6 over 38m) kubelet Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m (x6 over 38m) kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
Normal Starting 38m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 38m kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 38m kubelet Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 38m kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 38m kubelet Node functional-20220629181245-2408 status is now: NodeReady
Normal RegisteredNode 38m node-controller Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
Normal RegisteredNode 36m node-controller Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
Normal Starting 34m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 34m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 34m (x8 over 34m) kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 34m (x8 over 34m) kubelet Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 34m (x7 over 34m) kubelet Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
Normal RegisteredNode 34m node-controller Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
*
* ==> dmesg <==
* [Jun29 18:28] WSL2: Performing memory compaction.
[Jun29 18:29] WSL2: Performing memory compaction.
[Jun29 18:30] WSL2: Performing memory compaction.
[Jun29 18:31] WSL2: Performing memory compaction.
[Jun29 18:32] WSL2: Performing memory compaction.
[Jun29 18:33] WSL2: Performing memory compaction.
[Jun29 18:34] WSL2: Performing memory compaction.
[Jun29 18:35] WSL2: Performing memory compaction.
[Jun29 18:36] WSL2: Performing memory compaction.
[Jun29 18:37] WSL2: Performing memory compaction.
[Jun29 18:38] WSL2: Performing memory compaction.
[Jun29 18:39] WSL2: Performing memory compaction.
[Jun29 18:40] WSL2: Performing memory compaction.
[Jun29 18:41] WSL2: Performing memory compaction.
[Jun29 18:42] WSL2: Performing memory compaction.
[Jun29 18:43] WSL2: Performing memory compaction.
[Jun29 18:44] WSL2: Performing memory compaction.
[Jun29 18:45] WSL2: Performing memory compaction.
[Jun29 18:46] WSL2: Performing memory compaction.
[Jun29 18:47] WSL2: Performing memory compaction.
[Jun29 18:48] WSL2: Performing memory compaction.
[Jun29 18:49] WSL2: Performing memory compaction.
[Jun29 18:50] WSL2: Performing memory compaction.
[Jun29 18:51] WSL2: Performing memory compaction.
[Jun29 18:52] WSL2: Performing memory compaction.
*
* ==> etcd [5b56e5f42994] <==
* {"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.484Z","time spent":"1.1514841s","remote":"127.0.0.1:41498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1157,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.967Z","time spent":"668.713ms","remote":"127.0.0.1:41484","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.722173s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13595"}
{"level":"info","ts":"2022-06-29T18:22:38.636Z","caller":"traceutil/trace.go:171","msg":"trace[1090498900] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:977; }","duration":"1.7222338s","start":"2022-06-29T18:22:36.914Z","end":"2022-06-29T18:22:38.636Z","steps":["trace[1090498900] 'range keys from in-memory index tree' (duration: 1.721868s)"],"step_count":1}
{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:36.914Z","time spent":"1.7222991s","remote":"127.0.0.1:41502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13619,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.3621525s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-29T18:22:38.636Z","caller":"traceutil/trace.go:171","msg":"trace[1325742955] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:977; }","duration":"1.3628992s","start":"2022-06-29T18:22:37.273Z","end":"2022-06-29T18:22:38.636Z","steps":["trace[1325742955] 'range keys from in-memory index tree' (duration: 1.3620724s)"],"step_count":1}
{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.1559039s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-29T18:22:38.637Z","caller":"traceutil/trace.go:171","msg":"trace[1211758340] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:977; }","duration":"1.1568057s","start":"2022-06-29T18:22:37.480Z","end":"2022-06-29T18:22:38.637Z","steps":["trace[1211758340] 'range keys from in-memory index tree' (duration: 1.1557784s)"],"step_count":1}
{"level":"warn","ts":"2022-06-29T18:22:38.637Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.480Z","time spent":"1.1568767s","remote":"127.0.0.1:41522","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2022-06-29T18:28:11.575Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1009}
{"level":"info","ts":"2022-06-29T18:28:11.669Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1009,"took":"93.0258ms"}
{"level":"info","ts":"2022-06-29T18:29:25.773Z","caller":"traceutil/trace.go:171","msg":"trace[953015035] linearizableReadLoop","detail":"{readStateIndex:1473; appliedIndex:1473; }","duration":"197.3427ms","start":"2022-06-29T18:29:25.575Z","end":"2022-06-29T18:29:25.773Z","steps":["trace[953015035] 'read index received' (duration: 197.329ms)","trace[953015035] 'applied index is now lower than readState.Index' (duration: 9.3µs)"],"step_count":2}
{"level":"warn","ts":"2022-06-29T18:29:25.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.4074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-06-29T18:29:25.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.6091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-06-29T18:29:25.785Z","caller":"traceutil/trace.go:171","msg":"trace[1809912113] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1270; }","duration":"209.5852ms","start":"2022-06-29T18:29:25.575Z","end":"2022-06-29T18:29:25.785Z","steps":["trace[1809912113] 'agreement among raft nodes before linearized reading' (duration: 197.4971ms)"],"step_count":1}
{"level":"info","ts":"2022-06-29T18:29:25.785Z","caller":"traceutil/trace.go:171","msg":"trace[1900282098] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1270; }","duration":"110.6865ms","start":"2022-06-29T18:29:25.674Z","end":"2022-06-29T18:29:25.785Z","steps":["trace[1900282098] 'agreement among raft nodes before linearized reading' (duration: 98.6382ms)"],"step_count":1}
{"level":"info","ts":"2022-06-29T18:33:11.601Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1219}
{"level":"info","ts":"2022-06-29T18:33:11.602Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1219,"took":"771.4µs"}
{"level":"info","ts":"2022-06-29T18:38:11.619Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1428}
{"level":"info","ts":"2022-06-29T18:38:11.620Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1428,"took":"512.2µs"}
{"level":"info","ts":"2022-06-29T18:43:11.634Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1637}
{"level":"info","ts":"2022-06-29T18:43:11.636Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1637,"took":"825.5µs"}
{"level":"info","ts":"2022-06-29T18:48:11.649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1848}
{"level":"info","ts":"2022-06-29T18:48:11.650Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1848,"took":"658.3µs"}
*
* ==> etcd [8a221bc76ab3] <==
* {"level":"info","ts":"2022-06-29T18:17:55.473Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-29T18:17:55.474Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-29T18:17:55.474Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220629181245-2408 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:17:56.594Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-29T18:17:56.592Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-29T18:17:56.595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-29T18:17:56.596Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-06-29T18:17:58.575Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-29T18:17:58.575Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-20220629181245-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
WARNING: 2022/06/29 18:17:58 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/06/29 18:17:58 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-06-29T18:17:58.579Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2022-06-29T18:17:58.683Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-29T18:17:58.685Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-06-29T18:17:58.685Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-20220629181245-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
*
* ==> kernel <==
* 18:53:01 up 1:00, 0 users, load average: 1.34, 0.59, 0.60
Linux functional-20220629181245-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [3df956d56f8a] <==
* I0629 18:18:36.765763 1 controller.go:611] quota admission added evaluator for: endpoints
I0629 18:18:54.805309 1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.22.4]
I0629 18:18:54.889923 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0629 18:19:27.091943 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0629 18:19:27.876891 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.104.31.193]
I0629 18:20:02.770497 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.107.130.255]
I0629 18:21:56.775484 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.99.3.49]
I0629 18:22:30.912893 1 trace.go:205] Trace[464298855]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:94e1f486-30ee-403d-9c75-8e2dbd5b6305,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:30.272) (total time: 640ms):
Trace[464298855]: ---"About to write a response" 639ms (18:22:30.912)
Trace[464298855]: [640.0547ms] [640.0547ms] END
I0629 18:22:30.913261 1 trace.go:205] Trace[1639686775]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Jun-2022 18:22:29.902) (total time: 1010ms):
Trace[1639686775]: [1.0104576s] [1.0104576s] END
I0629 18:22:30.913734 1 trace.go:205] Trace[377805006]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:70b46666-5bea-4db2-b276-95178f0c4918,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:29.902) (total time: 1010ms):
Trace[377805006]: ---"Listing from storage done" 1010ms (18:22:30.913)
Trace[377805006]: [1.0109662s] [1.0109662s] END
I0629 18:22:38.638406 1 trace.go:205] Trace[1742180193]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Jun-2022 18:22:36.913) (total time: 1725ms):
Trace[1742180193]: [1.7250993s] [1.7250993s] END
I0629 18:22:38.638408 1 trace.go:205] Trace[1455204933]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:a77fbf62-ebcf-4aed-bfa4-a29e26785ca1,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:37.483) (total time: 1154ms):
Trace[1455204933]: ---"About to write a response" 1153ms (18:22:38.637)
Trace[1455204933]: [1.1541912s] [1.1541912s] END
I0629 18:22:38.639614 1 trace.go:205] Trace[1554511105]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:b5dc64e7-cb2d-49c9-bbb2-e93dc85cf6f9,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:36.913) (total time: 1726ms):
Trace[1554511105]: ---"Listing from storage done" 1725ms (18:22:38.638)
Trace[1554511105]: [1.7263635s] [1.7263635s] END
W0629 18:35:38.429939 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0629 18:44:05.619131 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [2a1307e43ecd] <==
* I0629 18:17:55.310243 1 serving.go:348] Generated self-signed cert in-memory
I0629 18:17:56.700986 1 controllermanager.go:180] Version: v1.24.2
I0629 18:17:56.701122 1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:17:56.703797 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0629 18:17:56.703940 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0629 18:17:56.703997 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0629 18:17:56.704110 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
*
* ==> kube-controller-manager [b45d5ee5a7f3] <==
* I0629 18:18:32.469978 1 shared_informer.go:262] Caches are synced for PVC protection
I0629 18:18:32.470023 1 shared_informer.go:262] Caches are synced for GC
I0629 18:18:32.470100 1 shared_informer.go:262] Caches are synced for attach detach
I0629 18:18:32.470143 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0629 18:18:32.469913 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0629 18:18:32.470260 1 event.go:294] "Event occurred" object="functional-20220629181245-2408" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller"
I0629 18:18:32.470371 1 shared_informer.go:262] Caches are synced for job
I0629 18:18:32.470383 1 shared_informer.go:262] Caches are synced for HPA
I0629 18:18:32.470106 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0629 18:18:32.470926 1 shared_informer.go:262] Caches are synced for deployment
I0629 18:18:32.471767 1 shared_informer.go:262] Caches are synced for ReplicationController
I0629 18:18:32.476069 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0629 18:18:32.485317 1 shared_informer.go:262] Caches are synced for resource quota
I0629 18:18:32.572262 1 shared_informer.go:262] Caches are synced for resource quota
I0629 18:18:33.069023 1 shared_informer.go:262] Caches are synced for garbage collector
I0629 18:18:33.069069 1 shared_informer.go:262] Caches are synced for garbage collector
I0629 18:18:33.069198 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0629 18:19:14.069008 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0629 18:19:14.069123 1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0629 18:19:27.202522 1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
I0629 18:19:27.473188 1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-m2pgx"
I0629 18:20:02.412242 1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
I0629 18:20:02.490404 1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-7pm4f"
I0629 18:21:56.811786 1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
I0629 18:21:56.973069 1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-b2279"
*
* ==> kube-proxy [249f22d60073] <==
* I0629 18:18:19.390561 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0629 18:18:19.393001 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0629 18:18:19.395414 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0629 18:18:19.397973 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0629 18:18:19.401391 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0629 18:18:19.481814 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0629 18:18:19.481936 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0629 18:18:19.482088 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0629 18:18:19.678168 1 server_others.go:206] "Using iptables Proxier"
I0629 18:18:19.678365 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0629 18:18:19.678382 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0629 18:18:19.678397 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0629 18:18:19.678424 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0629 18:18:19.678843 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0629 18:18:19.681793 1 server.go:661] "Version info" version="v1.24.2"
I0629 18:18:19.681917 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:18:19.686912 1 config.go:226] "Starting endpoint slice config controller"
I0629 18:18:19.687050 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0629 18:18:19.687100 1 config.go:444] "Starting node config controller"
I0629 18:18:19.687123 1 shared_informer.go:255] Waiting for caches to sync for node config
I0629 18:18:19.687113 1 config.go:317] "Starting service config controller"
I0629 18:18:19.687174 1 shared_informer.go:255] Waiting for caches to sync for service config
I0629 18:18:19.869002 1 shared_informer.go:262] Caches are synced for node config
I0629 18:18:19.869105 1 shared_informer.go:262] Caches are synced for service config
I0629 18:18:19.869126 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-proxy [ad13f7d80fed] <==
* E0629 18:17:53.802512 1 proxier.go:657] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
I0629 18:17:53.873341 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0629 18:17:53.881080 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0629 18:17:53.883649 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0629 18:17:53.886482 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0629 18:17:53.889538 1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0629 18:17:53.893344 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
E0629 18:17:54.968525 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
E0629 18:17:57.263184 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
*
* ==> kube-scheduler [d24ee2d8aa4b] <==
* I0629 18:18:10.683371 1 serving.go:348] Generated self-signed cert in-memory
W0629 18:18:16.069048 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0629 18:18:16.069207 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0629 18:18:16.069231 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0629 18:18:16.069248 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0629 18:18:16.180359 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
I0629 18:18:16.180482 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:18:16.183071 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0629 18:18:16.183162 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0629 18:18:16.183166 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0629 18:18:16.183228 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0629 18:18:16.369625 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [ede947b24c2c] <==
* I0629 18:17:36.029632 1 serving.go:348] Generated self-signed cert in-memory
W0629 18:17:47.208677 1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
W0629 18:17:47.208830 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0629 18:17:47.208843 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
*
* ==> kubelet <==
* -- Logs begin at Wed 2022-06-29 18:13:39 UTC, end at Wed 2022-06-29 18:53:02 UTC. --
Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676756 10903 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80abb961-e8fb-4b10-888e-07041c970642-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" (OuterVolumeSpecName: "mypd") pod "80abb961-e8fb-4b10-888e-07041c970642" (UID: "80abb961-e8fb-4b10-888e-07041c970642"). InnerVolumeSpecName "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676736 10903 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8qnv\" (UniqueName: \"kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv\") pod \"80abb961-e8fb-4b10-888e-07041c970642\" (UID: \"80abb961-e8fb-4b10-888e-07041c970642\") "
Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676993 10903 reconciler.go:312] "Volume detached for volume \"pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\" (UniqueName: \"kubernetes.io/host-path/80abb961-e8fb-4b10-888e-07041c970642-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\") on node \"functional-20220629181245-2408\" DevicePath \"\""
Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.680827 10903 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv" (OuterVolumeSpecName: "kube-api-access-t8qnv") pod "80abb961-e8fb-4b10-888e-07041c970642" (UID: "80abb961-e8fb-4b10-888e-07041c970642"). InnerVolumeSpecName "kube-api-access-t8qnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.778415 10903 reconciler.go:312] "Volume detached for volume \"kube-api-access-t8qnv\" (UniqueName: \"kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv\") on node \"functional-20220629181245-2408\" DevicePath \"\""
Jun 29 18:19:50 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:50.575370 10903 scope.go:110] "RemoveContainer" containerID="8eb2d8391ccfaea6ca7c6f63d9fec05f02003fa857df481a8c4f055337d64693"
Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:51.895601 10903 topology_manager.go:200] "Topology Admit Handler"
Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: E0629 18:19:51.895819 10903 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="80abb961-e8fb-4b10-888e-07041c970642" containerName="myfrontend"
Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:51.895890 10903 memory_manager.go:345] "RemoveStaleState removing state" podUID="80abb961-e8fb-4b10-888e-07041c970642" containerName="myfrontend"
Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.079847 10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2mn\" (UniqueName: \"kubernetes.io/projected/a43b54bd-e9b2-405a-a1ce-fe921f7596f3-kube-api-access-7z2mn\") pod \"sp-pod\" (UID: \"a43b54bd-e9b2-405a-a1ce-fe921f7596f3\") " pod="default/sp-pod"
Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.079978 10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\" (UniqueName: \"kubernetes.io/host-path/a43b54bd-e9b2-405a-a1ce-fe921f7596f3-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\") pod \"sp-pod\" (UID: \"a43b54bd-e9b2-405a-a1ce-fe921f7596f3\") " pod="default/sp-pod"
Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.878559 10903 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=80abb961-e8fb-4b10-888e-07041c970642 path="/var/lib/kubelet/pods/80abb961-e8fb-4b10-888e-07041c970642/volumes"
Jun 29 18:19:55 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:55.071809 10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="19704fb3610ebb1d7f68500b2e535cea07de6bce78eb409dc44bc82eae459972"
Jun 29 18:20:02 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:02.497984 10903 topology_manager.go:200] "Topology Admit Handler"
Jun 29 18:20:02 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:02.675414 10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ncwx\" (UniqueName: \"kubernetes.io/projected/5a35bec7-0a31-421a-98b3-2ac8fb3946dc-kube-api-access-4ncwx\") pod \"hello-node-54c4b5c49f-7pm4f\" (UID: \"5a35bec7-0a31-421a-98b3-2ac8fb3946dc\") " pod="default/hello-node-54c4b5c49f-7pm4f"
Jun 29 18:20:03 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:03.871063 10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="af2618c2f2906803e51dd58eb89c449934092ee26e5e5273aa984271d1540fb2"
Jun 29 18:21:56 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:56.987076 10903 topology_manager.go:200] "Topology Admit Handler"
Jun 29 18:21:57 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:57.185764 10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxbz\" (UniqueName: \"kubernetes.io/projected/f0829689-622a-4c53-84e4-00e90b285721-kube-api-access-kvxbz\") pod \"mysql-67f7d69d8b-b2279\" (UID: \"f0829689-622a-4c53-84e4-00e90b285721\") " pod="default/mysql-67f7d69d8b-b2279"
Jun 29 18:21:58 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:58.505149 10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9fbc6aee81bd15ea5d08a8220b497271df5007196f4d0694c2950cc3cf212360"
Jun 29 18:23:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:23:06.905441 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 29 18:28:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:28:06.903452 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 29 18:33:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:33:06.906539 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 29 18:38:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:38:06.909359 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 29 18:43:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:43:06.911631 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 29 18:48:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:48:06.925865 10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [22c1c4f4f76f] <==
* I0629 18:18:18.682442 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0629 18:18:18.975411 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0629 18:18:18.975763 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0629 18:18:36.772247 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0629 18:18:36.772547 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a577dbe5-0f30-49b2-a23b-cd68951ef24b", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5 became leader
I0629 18:18:36.772629 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5!
I0629 18:18:36.873595 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5!
I0629 18:19:14.069603 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0629 18:19:14.070027 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 2cbd621b-8004-4a3e-a6c5-a293bc58b2f3 386 0 2022-06-29 18:14:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-29 18:14:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-11ec5ef3-696c-483c-b90d-92cc296733a8 &PersistentVolumeClaim{ObjectMeta:{myclaim default 11ec5ef3-696c-483c-b90d-92cc296733a8 730 0 2022-06-29 18:19:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-06-29 18:19:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-06-29 18:19:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0629 18:19:14.071126 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" provisioned
I0629 18:19:14.071294 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0629 18:19:14.071308 1 volume_store.go:212] Trying to save persistentvolume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8"
I0629 18:19:14.071549 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"11ec5ef3-696c-483c-b90d-92cc296733a8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0629 18:19:14.185384 1 volume_store.go:219] persistentvolume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" saved
I0629 18:19:14.270938 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"11ec5ef3-696c-483c-b90d-92cc296733a8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-11ec5ef3-696c-483c-b90d-92cc296733a8
*
* ==> storage-provisioner [8f862c41e32a] <==
* I0629 18:17:53.689347 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0629 18:17:53.784672 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220629181245-2408 -n functional-20220629181245-2408
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220629181245-2408 -n functional-20220629181245-2408: (6.7803019s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220629181245-2408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220629181245-2408 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 describe pod : exit status 1 (183.4325ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220629181245-2408 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1987.77s)