Test Report: Docker_Windows 15310

af24d50c21096344c09c5fff0b9181d55a181bf0:2022-11-07:26449

Failed tests (8/277)

Order  Failed test                                  Duration (s)
81     TestFunctional/parallel/ServiceCmd           2172.86
250    TestPause/serial/PauseAgain                  45.12
299    TestNetworkPlugins/group/cilium/Start        583.88
313    TestNetworkPlugins/group/calico/Start        596.39
317    TestStartStop/group/newest-cni/serial/Pause  42.48
327    TestNetworkPlugins/group/false/DNS           353.69
328    TestNetworkPlugins/group/bridge/DNS          356.21
340    TestNetworkPlugins/group/kubenet/HairPin     56.95
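
These integration tests are ordinary Go tests, so a single failure can usually be replayed in isolation before digging through the logs below. A minimal sketch, assuming the standard minikube source layout (test/integration) and an already-built binary in out/; the harness may require extra flags, so treat this as an assumption rather than the CI's exact invocation:

	# Re-run only the failing ServiceCmd test, verbosely, with a generous timeout
	go test ./test/integration -run "TestFunctional/parallel/ServiceCmd" -v -timeout 60m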
TestFunctional/parallel/ServiceCmd (2172.86s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-170143 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-170143 expose deployment hello-node --type=NodePort --port=8080
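
The two kubectl invocations above are the whole workload setup. To reproduce them by hand against the same profile (the context name comes from this report; the final "get svc" check is an addition to see the assigned NodePort):

	kubectl --context functional-170143 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-170143 expose deployment hello-node --type=NodePort --port=8080
	# Inspect the service to see which NodePort was assigned (31883 in this run)
	kubectl --context functional-170143 get svc hello-node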

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-9t5g8" [d0fb1920-69a5-45d8-b407-9bcb5b0a566c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-9t5g8" [d0fb1920-69a5-45d8-b407-9bcb5b0a566c] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 36.1134299s
functional_test.go:1449: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 service list
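
The harness polls for pods labeled app=hello-node until they report Ready, with a 10m ceiling (36.1s sufficed here). Outside the harness, roughly the same wait can be expressed with stock kubectl; a sketch, not the test's actual mechanism:

	# Block until the hello-node pod is Ready, or give up after 10 minutes
	kubectl --context functional-170143 wait pod -l app=hello-node --for=condition=Ready --timeout=10m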

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 service list: (1.8753125s)
functional_test.go:1463: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node: exit status 1 (35m25.3913569s)

-- stdout --
	https://127.0.0.1:57848

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-170143 service --namespace=default --https --url hello-node" : exit status 1
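
Note that the command did print a usable URL (https://127.0.0.1:57848) before "failing": with the Docker driver on Windows, minikube keeps a tunnel open for as long as the URL is in use, so the process blocks until killed, and the harness only records the non-zero exit 35 minutes later. For manual debugging, one hedged alternative that avoids holding a minikube tunnel open is a plain kubectl port-forward (the local port 8080 is an arbitrary choice):

	# Forward a local port straight to the service, bypassing the minikube tunnel
	kubectl --context functional-170143 port-forward service/hello-node 8080:8080
	# ...then, from a second terminal:
	curl http://localhost:8080/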
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run:  kubectl --context functional-170143 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name:         hello-node-5fcdfb5cc4-9t5g8
Namespace:    default
Priority:     0
Node:         functional-170143/192.168.49.2
Start Time:   Mon, 07 Nov 2022 17:05:39 +0000
Labels:       app=hello-node
              pod-template-hash=5fcdfb5cc4
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
IPs:
  IP:  172.17.0.3
Controlled By:  ReplicaSet/hello-node-5fcdfb5cc4
Containers:
  echoserver:
    Container ID:   docker://785b4ed736b85a2190c29751ef74c4f8cc52ffa6072e8349168f3b3be175a1ec
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 07 Nov 2022 17:06:10 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6tm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-8l6tm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                        Message
  ----    ------     ----       ----                        -------
  Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-5fcdfb5cc4-9t5g8 to functional-170143
  Normal  Pulling    36m        kubelet, functional-170143  Pulling image "k8s.gcr.io/echoserver:1.8"
  Normal  Pulled     35m        kubelet, functional-170143  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 28.4153628s
  Normal  Created    35m        kubelet, functional-170143  Created container echoserver
  Normal  Started    35m        kubelet, functional-170143  Started container echoserver

Name:         hello-node-connect-6458c8fb6f-pstw6
Namespace:    default
Priority:     0
Node:         functional-170143/192.168.49.2
Start Time:   Mon, 07 Nov 2022 17:08:20 +0000
Labels:       app=hello-node-connect
              pod-template-hash=6458c8fb6f
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
  IP:  172.17.0.7
Controlled By:  ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
  echoserver:
    Container ID:   docker://3cd6c2db5f19184868bd08576732e3f52822925f39ad71b5993a9fa06a22dd77
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 07 Nov 2022 17:08:24 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnzwc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-jnzwc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                        Message
  ----    ------     ----       ----                        -------
  Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-connect-6458c8fb6f-pstw6 to functional-170143
  Normal  Pulled     33m        kubelet, functional-170143  Container image "k8s.gcr.io/echoserver:1.8" already present on machine
  Normal  Created    33m        kubelet, functional-170143  Created container echoserver
  Normal  Started    33m        kubelet, functional-170143  Started container echoserver

functional_test.go:1412: (dbg) Run:  kubectl --context functional-170143 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run:  kubectl --context functional-170143 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.100.207.192
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31883/TCP
Endpoints:                172.17.0.3:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
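
The service itself looks healthy: it has a live endpoint (172.17.0.3:8080) and an assigned NodePort (31883), so the failure is in URL retrieval on the host, not in the workload. A sketch of confirming reachability from inside the node, assuming the ClusterIP and NodePort above are still current and that curl is available in the node image:

	# Hit the ClusterIP and the NodePort from within the minikube node
	minikube -p functional-170143 ssh -- curl -s http://10.100.207.192:8080/
	minikube -p functional-170143 ssh -- curl -s http://192.168.49.2:31883/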
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-170143
helpers_test.go:235: (dbg) docker inspect functional-170143:

-- stdout --
	[
	    {
	        "Id": "286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515",
	        "Created": "2022-11-07T17:02:22.3086205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:02:23.3498703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/hostname",
	        "HostsPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/hosts",
	        "LogPath": "/var/lib/docker/containers/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515/286f6c96dd5661d9d7d942581b5e175b2720f36a9678a851c978d552b61a4515-json.log",
	        "Name": "/functional-170143",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-170143:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-170143",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14
d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/d
ocker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475ad
f77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae7
45d03cb146ad24eb8be305a2a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/merged",
	                "UpperDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/diff",
	                "WorkDir": "/var/lib/docker/overlay2/037ff98fecd1928a702b7ead72b0f52271bee45c53e0caef80129b972bd0c537/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-170143",
	                "Source": "/var/lib/docker/volumes/functional-170143/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-170143",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-170143",
	                "name.minikube.sigs.k8s.io": "functional-170143",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f528b4b5171d81aba0a127b13b266c0f0c768f036ee89240e540e938981ca50",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57560"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57561"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57563"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57559"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2f528b4b5171",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-170143": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "286f6c96dd56",
	                        "functional-170143"
	                    ],
	                    "NetworkID": "416315494de4a4776bd847db2873960fee12378f7680524d5296a2ef6fd9edc7",
	                    "EndpointID": "05fe04bf5c6ad6bd8b44d7394ed36d9d9ee62dd9a3720799f879052b396bb5a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
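
Most of the inspect dump above is noise for this failure; the useful fields are the container state and the published host ports. The standard docker CLI can pull just those out with --format, for example:

	# Show only the container state and the host port mappings
	docker inspect -f "{{.State.Status}}" functional-170143
	docker inspect -f "{{json .NetworkSettings.Ports}}" functional-170143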
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-170143 -n functional-170143
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-170143 -n functional-170143: (1.9156279s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 logs -n 25: (3.3600363s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| service        | functional-170143 service                                              | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT |                     |
	|                | --namespace=default --https                                            |                   |                   |         |                     |                     |
	|                | --url hello-node                                                       |                   |                   |         |                     |                     |
	| image          | functional-170143 image load --daemon                                  | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-170143               |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	| image          | functional-170143 image save                                           | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-170143               |                   |                   |         |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	| image          | functional-170143 image rm                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-170143               |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	| image          | functional-170143 image load                                           | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:06 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:06 GMT | 07 Nov 22 17:07 GMT |
	| image          | functional-170143 image save --daemon                                  | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-170143               |                   |                   |         |                     |                     |
	| ssh            | functional-170143 ssh echo                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
	|                | hello                                                                  |                   |                   |         |                     |                     |
	| ssh            | functional-170143 ssh cat                                              | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT | 07 Nov 22 17:07 GMT |
	|                | /etc/hostname                                                          |                   |                   |         |                     |                     |
	| dashboard      | --url --port 36195                                                     | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:07 GMT |                     |
	|                | -p functional-170143                                                   |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |                   |         |                     |                     |
	| tunnel         | functional-170143 tunnel                                               | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT |                     |
	|                | --alsologtostderr                                                      |                   |                   |         |                     |                     |
	| addons         | functional-170143 addons list                                          | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	| addons         | functional-170143 addons list                                          | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | -o json                                                                |                   |                   |         |                     |                     |
	| update-context | functional-170143                                                      | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| update-context | functional-170143                                                      | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| update-context | functional-170143                                                      | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | --format short                                                         |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | --format yaml                                                          |                   |                   |         |                     |                     |
	| ssh            | functional-170143 ssh pgrep                                            | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT |                     |
	|                | buildkitd                                                              |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | --format json                                                          |                   |                   |         |                     |                     |
	| image          | functional-170143 image build -t                                       | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | localhost/my-image:functional-170143                                   |                   |                   |         |                     |                     |
	|                | testdata\build                                                         |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|                | --format table                                                         |                   |                   |         |                     |                     |
	| image          | functional-170143 image ls                                             | functional-170143 | minikube2\jenkins | v1.28.0 | 07 Nov 22 17:08 GMT | 07 Nov 22 17:08 GMT |
	|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 17:06:12
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 17:06:12.828045    7932 out.go:296] Setting OutFile to fd 964 ...
	I1107 17:06:12.932233    7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:12.932233    7932 out.go:309] Setting ErrFile to fd 968...
	I1107 17:06:12.932233    7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:12.955241    7932 out.go:303] Setting JSON to false
	I1107 17:06:12.958239    7932 start.go:116] hostinfo: {"hostname":"minikube2","uptime":5410,"bootTime":1667835362,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 17:06:12.958239    7932 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 17:06:12.962253    7932 out.go:177] * [functional-170143] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 17:06:12.966244    7932 notify.go:220] Checking for updates...
	I1107 17:06:12.968243    7932 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 17:06:12.970258    7932 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 17:06:12.973275    7932 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:06:12.976233    7932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:06:12.979239    7932 config.go:180] Loaded profile config "functional-170143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:06:12.980244    7932 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:06:13.331754    7932 docker.go:137] docker version: linux-20.10.20
	I1107 17:06:13.345743    7932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:06:14.069739    7932 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-11-07 17:06:13.5155383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:06:14.075741    7932 out.go:177] * Using the docker driver based on existing profile
	I1107 17:06:14.077752    7932 start.go:282] selected driver: docker
	I1107 17:06:14.077752    7932 start.go:808] validating driver "docker" against &{Name:functional-170143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-170143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:06:14.077752    7932 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:06:14.157382    7932 out.go:177] 
	W1107 17:06:14.160415    7932 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 17:06:14.165378    7932 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:02:24 UTC, end at Mon 2022-11-07 17:41:48 UTC. --
	Nov 07 17:04:53 functional-170143 dockerd[8121]: time="2022-11-07T17:04:53.547706300Z" level=info msg="Loading containers: start."
	Nov 07 17:04:53 functional-170143 dockerd[8121]: time="2022-11-07T17:04:53.995554500Z" level=info msg="ignoring event" container=89de05a3b74d38a2ff938c03814c6fdf6722cd6fa17a02770be1e5e2a2611b3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.355352000Z" level=info msg="Removing stale sandbox 22a3d72c91eaf04eecd915d47dd0ae37c0cc184fc27957d1c4809114b87269e8 (89de05a3b74d38a2ff938c03814c6fdf6722cd6fa17a02770be1e5e2a2611b3b)"
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.366202700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8ac4136d2bc36547ef6890b6aae764b53ef77d70fb9f04e8d7a141ba8e9457bf 9db520486c82bb5beecd69889569e2b05cc4520dc7901e467e603bcfbd694ecc], retrying...."
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.545914900Z" level=info msg="Removing stale sandbox 57d62f70e24e22cc4fb0892e05f2f14ac6dd2dd2dbc2b07db5295a4e970ae3fe (9afebb5975cea7c01d374265f9a95b92e4dea3431eef9819afcf62fd151aa235)"
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.556669200Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8ac4136d2bc36547ef6890b6aae764b53ef77d70fb9f04e8d7a141ba8e9457bf 232fe3681e82bf3610e01d9cd14b90507af0012ff7855fd57e53e524faeffb5f], retrying...."
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.648502800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.780079200Z" level=info msg="Loading containers: done."
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.845578500Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.845746600Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:04:54 functional-170143 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.940435600Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:04:54 functional-170143 dockerd[8121]: time="2022-11-07T17:04:54.950115400Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.786467000Z" level=info msg="ignoring event" container=6e761f8447a3bcc89ceb2dd9090c9c4e42d57accb71c0f3c50067986670de7c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.975875500Z" level=info msg="ignoring event" container=9259c007b570a8dd11c61a4c99b15df8b1cd1c836624c55de5fefdb65f57754d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.976240300Z" level=info msg="ignoring event" container=b990ee9fa61f66ef72e67668a397591995ff41184f0e78a87615265026764204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.976449900Z" level=info msg="ignoring event" container=1aa7c0b88e6c8bb06efc89ffa49afa51d9ba4de48d74b0ddecc22f8d4ceb7288 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:00 functional-170143 dockerd[8121]: time="2022-11-07T17:05:00.978909800Z" level=info msg="ignoring event" container=c11858a245ccea0dd37dddb3f929b1cd3d74c01bad805a2bedaf17cd13a89e2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:01 functional-170143 dockerd[8121]: time="2022-11-07T17:05:01.074881600Z" level=info msg="ignoring event" container=5091149bb59264b580b37f0b0a4f5ad0f4d3ad9add76e8b5df4ea577a1ecb689 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:02 functional-170143 dockerd[8121]: time="2022-11-07T17:05:02.728996100Z" level=info msg="ignoring event" container=070256b26e4ea3128744a749d4e545c0a30c7eb6ae633b5d5c3897a1815c84e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:05:17 functional-170143 dockerd[8121]: time="2022-11-07T17:05:17.797287900Z" level=info msg="ignoring event" container=602e34b94dd17e190e2774ee0caa46fae8fccf76152c627d8bd7b35c3dbddf36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:08:05 functional-170143 dockerd[8121]: time="2022-11-07T17:08:05.055533700Z" level=info msg="ignoring event" container=1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:08:05 functional-170143 dockerd[8121]: time="2022-11-07T17:08:05.211499000Z" level=info msg="ignoring event" container=61c0075177518d812865641a29d03b4ca5b0d19409b37586778cf4c3b867c828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:08:42 functional-170143 dockerd[8121]: time="2022-11-07T17:08:42.438894600Z" level=info msg="ignoring event" container=b48f34b1c799d586061268710b4eff4e674a8de872823a3302d9e06f33097a3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:08:43 functional-170143 dockerd[8121]: time="2022-11-07T17:08:43.083755500Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	2fcdc268beef1       nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3                   33 minutes ago      Running             nginx                     0                   836e27720f791
	3cd6c2db5f191       82e4c8a736a4f                                                                                   33 minutes ago      Running             echoserver                0                   af36bfdb4c07a
	765ca37369cd7       nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f                   33 minutes ago      Running             myfrontend                0                   82afd834e711f
	e5af10ec33df3       mysql@sha256:0e3435e72c493aec752d8274379b1eac4d634f47a7781a7a92b8636fa1dc94c1                   34 minutes ago      Running             mysql                     0                   5a8450d94fd7e
	785b4ed736b85       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   35 minutes ago      Running             echoserver                0                   9f817759ea085
	ed1bd51d97794       5185b96f0becf                                                                                   36 minutes ago      Running             coredns                   3                   f1c4fed1ed70d
	0300249d11be5       6e38f40d628db                                                                                   36 minutes ago      Running             storage-provisioner       3                   d277e588fde7f
	d333e1a26cba3       beaaf00edd38a                                                                                   36 minutes ago      Running             kube-proxy                3                   81603c538e45d
	afb78fb3244ca       0346dbd74bcb9                                                                                   36 minutes ago      Running             kube-apiserver            0                   7e2423dff2eff
	c93c8ccb82ea9       6d23ec0e8b87e                                                                                   36 minutes ago      Running             kube-scheduler            3                   72b6a9854c8af
	28821c1a02306       a8a176a5d5d69                                                                                   36 minutes ago      Running             etcd                      3                   7d06efd8f3096
	b7062e63f12a6       6039992312758                                                                                   36 minutes ago      Running             kube-controller-manager   3                   c0ddd34f7d6d8
	2ac37c176ed7f       6e38f40d628db                                                                                   37 minutes ago      Exited              storage-provisioner       2                   c3924db868421
	01399ac93dbc8       6039992312758                                                                                   37 minutes ago      Exited              kube-controller-manager   2                   07e597b2fc291
	16ede80c27553       5185b96f0becf                                                                                   37 minutes ago      Exited              coredns                   2                   e160201bab365
	ea4cf65607784       6d23ec0e8b87e                                                                                   37 minutes ago      Exited              kube-scheduler            2                   85b79210c2550
	b3b6091afc11b       beaaf00edd38a                                                                                   37 minutes ago      Exited              kube-proxy                2                   259799cb2778d
	ae63504cf46ee       a8a176a5d5d69                                                                                   37 minutes ago      Exited              etcd                      2                   9a58d0f050c52
	
	* 
	* ==> coredns [16ede80c2755] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [ed1bd51d9779] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               functional-170143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-170143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
	                    minikube.k8s.io/name=functional-170143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_07T17_03_00_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:02:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-170143
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:39:53 +0000   Mon, 07 Nov 2022 17:02:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:39:53 +0000   Mon, 07 Nov 2022 17:02:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:39:53 +0000   Mon, 07 Nov 2022 17:02:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:39:53 +0000   Mon, 07 Nov 2022 17:03:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-170143
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                996614ec4c814b87b7ec8ebee3d0e8c9
	  Boot ID:                    5d9b34fc-681b-4fde-9fda-bd2b0089dce3
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5fcdfb5cc4-9t5g8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
	  default                     hello-node-connect-6458c8fb6f-pstw6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     mysql-596b7fcdbf-f99rm                       600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     35m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 coredns-565d847f94-gd62f                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-170143                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-170143             250m (1%)     0 (0%)      0 (0%)           0 (0%)         36m
	  kube-system                 kube-controller-manager-functional-170143    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-phtqg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-170143             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38m                kube-proxy       
	  Normal  Starting                 36m                kube-proxy       
	  Normal  Starting                 37m                kube-proxy       
	  Normal  NodeHasSufficientMemory  39m (x7 over 39m)  kubelet          Node functional-170143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m (x7 over 39m)  kubelet          Node functional-170143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m (x6 over 39m)  kubelet          Node functional-170143 status is now: NodeHasSufficientPID
	  Normal  Starting                 38m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38m                kubelet          Node functional-170143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet          Node functional-170143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet          Node functional-170143 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           38m                node-controller  Node functional-170143 event: Registered Node functional-170143 in Controller
	  Normal  NodeReady                38m                kubelet          Node functional-170143 status is now: NodeReady
	  Normal  RegisteredNode           37m                node-controller  Node functional-170143 event: Registered Node functional-170143 in Controller
	  Normal  Starting                 36m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36m (x8 over 36m)  kubelet          Node functional-170143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36m (x8 over 36m)  kubelet          Node functional-170143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36m (x7 over 36m)  kubelet          Node functional-170143 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36m                node-controller  Node functional-170143 event: Registered Node functional-170143 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 7 17:16] WSL2: Performing memory compaction.
	[Nov 7 17:17] WSL2: Performing memory compaction.
	[Nov 7 17:18] WSL2: Performing memory compaction.
	[Nov 7 17:19] WSL2: Performing memory compaction.
	[Nov 7 17:20] WSL2: Performing memory compaction.
	[Nov 7 17:21] WSL2: Performing memory compaction.
	[Nov 7 17:22] WSL2: Performing memory compaction.
	[Nov 7 17:23] WSL2: Performing memory compaction.
	[Nov 7 17:24] WSL2: Performing memory compaction.
	[Nov 7 17:25] WSL2: Performing memory compaction.
	[Nov 7 17:26] WSL2: Performing memory compaction.
	[Nov 7 17:27] WSL2: Performing memory compaction.
	[Nov 7 17:28] WSL2: Performing memory compaction.
	[Nov 7 17:29] WSL2: Performing memory compaction.
	[Nov 7 17:30] WSL2: Performing memory compaction.
	[Nov 7 17:31] WSL2: Performing memory compaction.
	[Nov 7 17:32] WSL2: Performing memory compaction.
	[Nov 7 17:33] WSL2: Performing memory compaction.
	[Nov 7 17:34] WSL2: Performing memory compaction.
	[Nov 7 17:35] WSL2: Performing memory compaction.
	[Nov 7 17:36] WSL2: Performing memory compaction.
	[Nov 7 17:37] WSL2: Performing memory compaction.
	[Nov 7 17:39] WSL2: Performing memory compaction.
	[Nov 7 17:40] WSL2: Performing memory compaction.
	[Nov 7 17:41] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [28821c1a0230] <==
	* {"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"999.1652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8262"}
	{"level":"info","ts":"2022-11-07T17:07:58.111Z","caller":"traceutil/trace.go:171","msg":"trace[1416244392] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:801; }","duration":"999.2252ms","start":"2022-11-07T17:07:57.112Z","end":"2022-11-07T17:07:58.111Z","steps":["trace[1416244392] 'range keys from in-memory index tree'  (duration: 999.0124ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:07:58.111Z","caller":"traceutil/trace.go:171","msg":"trace[135707562] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:801; }","duration":"372.2708ms","start":"2022-11-07T17:07:57.739Z","end":"2022-11-07T17:07:58.111Z","steps":["trace[135707562] 'range keys from in-memory index tree'  (duration: 371.8672ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.739Z","time spent":"372.4093ms","remote":"127.0.0.1:57340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.112Z","time spent":"999.3325ms","remote":"127.0.0.1:57314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8286,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"695.4235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-11-07T17:07:58.112Z","caller":"traceutil/trace.go:171","msg":"trace[1956856599] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:801; }","duration":"695.976ms","start":"2022-11-07T17:07:57.416Z","end":"2022-11-07T17:07:58.112Z","steps":["trace[1956856599] 'count revisions from in-memory index tree'  (duration: 695.1103ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:07:58.112Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T17:07:57.415Z","time spent":"696.1507ms","remote":"127.0.0.1:57346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":4,"response size":31,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	{"level":"warn","ts":"2022-11-07T17:07:58.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"266.0972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-11-07T17:07:58.112Z","caller":"traceutil/trace.go:171","msg":"trace[2109282954] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:801; }","duration":"267.0168ms","start":"2022-11-07T17:07:57.845Z","end":"2022-11-07T17:07:58.112Z","steps":["trace[2109282954] 'count revisions from in-memory index tree'  (duration: 265.9218ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:15:12.432Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2022-11-07T17:15:12.434Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":959,"took":"1.3686ms"}
	{"level":"info","ts":"2022-11-07T17:20:12.448Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1170}
	{"level":"info","ts":"2022-11-07T17:20:12.449Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1170,"took":"567.8µs"}
	{"level":"info","ts":"2022-11-07T17:22:04.280Z","caller":"traceutil/trace.go:171","msg":"trace[111581127] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1458; }","duration":"100.006ms","start":"2022-11-07T17:22:04.180Z","end":"2022-11-07T17:22:04.280Z","steps":["trace[111581127] 'count revisions from in-memory index tree'  (duration: 96.1622ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:24:11.289Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.5219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2022-11-07T17:24:11.289Z","caller":"traceutil/trace.go:171","msg":"trace[1482763664] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:1546; }","duration":"102.7202ms","start":"2022-11-07T17:24:11.186Z","end":"2022-11-07T17:24:11.289Z","steps":["trace[1482763664] 'range keys from in-memory index tree'  (duration: 102.0781ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:25:12.476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2022-11-07T17:25:12.477Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1380,"took":"664.2µs"}
	{"level":"info","ts":"2022-11-07T17:30:12.496Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1591}
	{"level":"info","ts":"2022-11-07T17:30:12.498Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1591,"took":"642.8µs"}
	{"level":"info","ts":"2022-11-07T17:35:12.513Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1801}
	{"level":"info","ts":"2022-11-07T17:35:12.514Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1801,"took":"612.2µs"}
	{"level":"info","ts":"2022-11-07T17:40:12.532Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2011}
	{"level":"info","ts":"2022-11-07T17:40:12.533Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2011,"took":"655.5µs"}
	
	* 
	* ==> etcd [ae63504cf46e] <==
	* {"level":"info","ts":"2022-11-07T17:03:55.088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:03:55.091Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-11-07T17:03:55.094Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-11-07T17:04:03.383Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.1166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-170143\" ","response":"range_response_count:1 size:4573"}
	{"level":"info","ts":"2022-11-07T17:04:03.383Z","caller":"traceutil/trace.go:171","msg":"trace[716453105] range","detail":"{range_begin:/registry/minions/functional-170143; range_end:; response_count:1; response_revision:418; }","duration":"209.3561ms","start":"2022-11-07T17:04:03.174Z","end":"2022-11-07T17:04:03.383Z","steps":["trace[716453105] 'agreement among raft nodes before linearized reading'  (duration: 196.7318ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.0579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[286629770] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:419; }","duration":"111.1276ms","start":"2022-11-07T17:04:03.272Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[286629770] 'agreement among raft nodes before linearized reading'  (duration: 110.9985ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.1724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[434933487] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:419; }","duration":"103.3641ms","start":"2022-11-07T17:04:03.280Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[434933487] 'agreement among raft nodes before linearized reading'  (duration: 103.1491ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[344659348] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"111.4028ms","start":"2022-11-07T17:04:03.272Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[344659348] 'process raft request'  (duration: 98.3363ms)","trace[344659348] 'compare'  (duration: 12.1396ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T17:04:03.384Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.0298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-functional-170143\" ","response":"range_response_count:1 size:5204"}
	{"level":"info","ts":"2022-11-07T17:04:03.384Z","caller":"traceutil/trace.go:171","msg":"trace[1930719174] range","detail":"{range_begin:/registry/pods/kube-system/etcd-functional-170143; range_end:; response_count:1; response_revision:419; }","duration":"107.1822ms","start":"2022-11-07T17:04:03.277Z","end":"2022-11-07T17:04:03.384Z","steps":["trace[1930719174] 'agreement among raft nodes before linearized reading'  (duration: 106.8497ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:04:03.698Z","caller":"traceutil/trace.go:171","msg":"trace[497007862] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"112.7291ms","start":"2022-11-07T17:04:03.585Z","end":"2022-11-07T17:04:03.698Z","steps":["trace[497007862] 'process raft request'  (duration: 84.7532ms)","trace[497007862] 'compare'  (duration: 27.6303ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T17:04:03.699Z","caller":"traceutil/trace.go:171","msg":"trace[1458494014] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"110.3974ms","start":"2022-11-07T17:04:03.589Z","end":"2022-11-07T17:04:03.699Z","steps":["trace[1458494014] 'process raft request'  (duration: 109.4941ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T17:04:03.698Z","caller":"traceutil/trace.go:171","msg":"trace[1978917583] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"104.9236ms","start":"2022-11-07T17:04:03.593Z","end":"2022-11-07T17:04:03.698Z","steps":["trace[1978917583] 'process raft request'  (duration: 104.7377ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T17:04:03.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.4195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T17:04:03.700Z","caller":"traceutil/trace.go:171","msg":"trace[1974693587] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:426; }","duration":"122.5152ms","start":"2022-11-07T17:04:03.578Z","end":"2022-11-07T17:04:03.700Z","steps":["trace[1974693587] 'agreement among raft nodes before linearized reading'  (duration: 92.6023ms)","trace[1974693587] 'range keys from in-memory index tree'  (duration: 27.7916ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T17:04:47.971Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T17:04:47.971Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-170143","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/11/07 17:04:47 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/07 17:04:48 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T17:04:48.185Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-11-07T17:04:48.375Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-11-07T17:04:48.377Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-11-07T17:04:48.377Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-170143","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:41:49 up 56 min,  0 users,  load average: 0.50, 0.67, 0.78
	Linux functional-170143 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [afb78fb3244c] <==
	* Trace[1432501637]: ---"Listing from storage done" 779ms (17:07:06.884)
	Trace[1432501637]: [780.8709ms] [780.8709ms] END
	I1107 17:07:06.885822       1 trace.go:205] Trace[1374766163]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:5c9854e3-cf26-4023-95fe-bd2ee388c743,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:06.104) (total time: 781ms):
	Trace[1374766163]: ---"Listing from storage done" 780ms (17:07:06.884)
	Trace[1374766163]: [781.1142ms] [781.1142ms] END
	I1107 17:07:29.530796       1 trace.go:205] Trace[1347303991]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:4d6f392c-9162-4ba5-9e90-2d2efb2d0411,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.279) (total time: 1251ms):
	Trace[1347303991]: ---"About to write a response" 1250ms (17:07:29.530)
	Trace[1347303991]: [1.2511385s] [1.2511385s] END
	I1107 17:07:29.530820       1 trace.go:205] Trace[402617218]: "List(recursive=true) etcd3" audit-id:f233fc42-5117-40b5-b9f9-b9e465c97a08,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:28.105) (total time: 1425ms):
	Trace[402617218]: [1.4252376s] [1.4252376s] END
	I1107 17:07:29.531458       1 trace.go:205] Trace[672415117]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:f233fc42-5117-40b5-b9f9-b9e465c97a08,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.105) (total time: 1425ms):
	Trace[672415117]: ---"Listing from storage done" 1425ms (17:07:29.530)
	Trace[672415117]: [1.4259126s] [1.4259126s] END
	I1107 17:07:29.532004       1 trace.go:205] Trace[421264066]: "List(recursive=true) etcd3" audit-id:4fcd7a0b-9536-4f38-872b-02fded3f4752,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:28.105) (total time: 1425ms):
	Trace[421264066]: [1.4259525s] [1.4259525s] END
	I1107 17:07:29.532883       1 trace.go:205] Trace[68099947]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:4fcd7a0b-9536-4f38-872b-02fded3f4752,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:28.105) (total time: 1426ms):
	Trace[68099947]: ---"Listing from storage done" 1426ms (17:07:29.532)
	Trace[68099947]: [1.4268803s] [1.4268803s] END
	I1107 17:07:58.113128       1 trace.go:205] Trace[401185032]: "List(recursive=true) etcd3" audit-id:0af106e6-ed5e-486b-bfb1-a245046aeb2a,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Nov-2022 17:07:57.111) (total time: 1001ms):
	Trace[401185032]: [1.0017483s] [1.0017483s] END
	I1107 17:07:58.113991       1 trace.go:205] Trace[782144908]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:0af106e6-ed5e-486b-bfb1-a245046aeb2a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (07-Nov-2022 17:07:57.111) (total time: 1002ms):
	Trace[782144908]: ---"Listing from storage done" 1001ms (17:07:58.113)
	Trace[782144908]: [1.0026453s] [1.0026453s] END
	I1107 17:08:17.604879       1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.111.201.194]
	I1107 17:08:21.156397       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.102.223.227]
	
	* 
	* ==> kube-controller-manager [01399ac93dbc] <==
	* I1107 17:04:15.970873       1 shared_informer.go:262] Caches are synced for HPA
	I1107 17:04:15.970929       1 shared_informer.go:262] Caches are synced for PVC protection
	I1107 17:04:15.972170       1 shared_informer.go:262] Caches are synced for expand
	I1107 17:04:15.972286       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1107 17:04:15.972189       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 17:04:15.972287       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1107 17:04:15.972217       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 17:04:15.972191       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1107 17:04:15.972254       1 shared_informer.go:262] Caches are synced for cronjob
	I1107 17:04:15.972277       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1107 17:04:15.972276       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1107 17:04:15.972240       1 shared_informer.go:262] Caches are synced for disruption
	I1107 17:04:15.973029       1 shared_informer.go:262] Caches are synced for ephemeral
	I1107 17:04:15.975122       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 17:04:15.978251       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1107 17:04:15.979384       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 17:04:15.981994       1 shared_informer.go:262] Caches are synced for deployment
	I1107 17:04:15.989398       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1107 17:04:15.995107       1 shared_informer.go:262] Caches are synced for stateful set
	I1107 17:04:15.998905       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:04:16.004545       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1107 17:04:16.071854       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:04:16.375298       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:04:16.375389       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 17:04:16.379453       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [b7062e63f12a] <==
	* I1107 17:05:32.775277       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 17:05:32.775624       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1107 17:05:32.775928       1 shared_informer.go:262] Caches are synced for taint
	I1107 17:05:32.776460       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 17:05:32.776507       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 17:05:32.776679       1 event.go:294] "Event occurred" object="functional-170143" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-170143 event: Registered Node functional-170143 in Controller"
	W1107 17:05:32.776728       1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-170143. Assuming now as a timestamp.
	I1107 17:05:32.776740       1 taint_manager.go:209] "Sending events to api server"
	I1107 17:05:32.776794       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 17:05:32.777253       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 17:05:32.779608       1 shared_informer.go:262] Caches are synced for PVC protection
	I1107 17:05:32.875846       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 17:05:32.879751       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:05:32.885620       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:05:33.197215       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:05:33.266122       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:05:33.266263       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 17:05:39.674393       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
	I1107 17:05:39.719806       1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-9t5g8"
	I1107 17:06:01.084587       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
	I1107 17:06:01.174875       1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-f99rm"
	I1107 17:06:20.187447       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I1107 17:06:20.187609       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I1107 17:08:20.878483       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
	I1107 17:08:20.905232       1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-pstw6"
	
	* 
	* ==> kube-proxy [b3b6091afc11] <==
	* I1107 17:03:53.787423       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 17:03:53.873678       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 17:03:53.877208       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 17:03:53.884338       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1107 17:03:53.889731       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-170143": dial tcp 192.168.49.2:8441: connect: connection refused
	I1107 17:04:03.387587       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I1107 17:04:03.387840       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I1107 17:04:03.388312       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:04:03.570972       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:04:03.571143       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:04:03.571167       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:04:03.571192       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:04:03.571243       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:04:03.571820       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:04:03.572465       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:04:03.572587       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:04:03.573773       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:04:03.573909       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:04:03.574041       1 config.go:317] "Starting service config controller"
	I1107 17:04:03.574066       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:04:03.577513       1 config.go:444] "Starting node config controller"
	I1107 17:04:03.577759       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:04:03.674979       1 shared_informer.go:262] Caches are synced for service config
	I1107 17:04:03.675100       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 17:04:03.677946       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [d333e1a26cba] <==
	* I1107 17:05:19.473257       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 17:05:19.477234       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 17:05:19.481134       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 17:05:19.485238       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 17:05:19.488401       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1107 17:05:19.673666       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I1107 17:05:19.673733       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I1107 17:05:19.673805       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:05:19.975603       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:05:19.975723       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:05:19.975738       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:05:19.975757       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:05:19.975783       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:05:19.976445       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:05:19.976947       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:05:19.977075       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:05:19.977958       1 config.go:317] "Starting service config controller"
	I1107 17:05:19.978109       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:05:19.979522       1 config.go:444] "Starting node config controller"
	I1107 17:05:19.979615       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:05:19.979707       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:05:19.979718       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:05:20.078373       1 shared_informer.go:262] Caches are synced for service config
	I1107 17:05:20.080004       1 shared_informer.go:262] Caches are synced for node config
	I1107 17:05:20.080229       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c93c8ccb82ea] <==
	* I1107 17:05:11.320852       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:05:16.571935       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:05:16.571983       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 17:05:16.572007       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:05:16.572023       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:05:16.682584       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:05:16.682644       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:05:16.685249       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:05:16.685433       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:05:16.685460       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:05:16.685598       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:05:16.786618       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ea4cf6560778] <==
	* I1107 17:03:55.575272       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:04:03.076340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:04:03.076394       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1107 17:04:03.076422       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:04:03.076439       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:04:03.188275       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:04:03.188385       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:04:03.190894       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:04:03.191003       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:04:03.190926       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:04:03.192135       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:04:03.292762       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:04:47.880210       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1107 17:04:47.880502       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1107 17:04:47.880524       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I1107 17:04:47.880607       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1107 17:04:47.880721       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:02:24 UTC, end at Mon 2022-11-07 17:41:49 UTC. --
	Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.866894    9654 reconciler.go:399] "Volume detached for volume \"kube-api-access-x4vtv\" (UniqueName: \"kubernetes.io/projected/121c7dc7-8244-410e-a584-a3e68b338d43-kube-api-access-x4vtv\") on node \"functional-170143\" DevicePath \"\""
	Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.867060    9654 reconciler.go:399] "Volume detached for volume \"pvc-285ccea9-bf55-480d-a198-16b12f688a34\" (UniqueName: \"kubernetes.io/host-path/121c7dc7-8244-410e-a584-a3e68b338d43-pvc-285ccea9-bf55-480d-a198-16b12f688a34\") on node \"functional-170143\" DevicePath \"\""
	Nov 07 17:08:05 functional-170143 kubelet[9654]: I1107 17:08:05.938730    9654 scope.go:115] "RemoveContainer" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.077112    9654 scope.go:115] "RemoveContainer" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: E1107 17:08:06.082894    9654 remote_runtime.go:599] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81" containerID="1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.083284    9654 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81} err="failed to get container status \"1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81\": rpc error: code = Unknown desc = Error: No such container: 1fe32c7d05e13a0aaffee67e588e3cc96000d4e1d009c29835db307476c28f81"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.493239    9654 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: E1107 17:08:06.493365    9654 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="121c7dc7-8244-410e-a584-a3e68b338d43" containerName="myfrontend"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.493413    9654 memory_manager.go:345] "RemoveStaleState removing state" podUID="121c7dc7-8244-410e-a584-a3e68b338d43" containerName="myfrontend"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.681401    9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-285ccea9-bf55-480d-a198-16b12f688a34\" (UniqueName: \"kubernetes.io/host-path/f05cebe7-e0b0-4e41-b10d-2b5757c91d06-pvc-285ccea9-bf55-480d-a198-16b12f688a34\") pod \"sp-pod\" (UID: \"f05cebe7-e0b0-4e41-b10d-2b5757c91d06\") " pod="default/sp-pod"
	Nov 07 17:08:06 functional-170143 kubelet[9654]: I1107 17:08:06.681622    9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sslhp\" (UniqueName: \"kubernetes.io/projected/f05cebe7-e0b0-4e41-b10d-2b5757c91d06-kube-api-access-sslhp\") pod \"sp-pod\" (UID: \"f05cebe7-e0b0-4e41-b10d-2b5757c91d06\") " pod="default/sp-pod"
	Nov 07 17:08:07 functional-170143 kubelet[9654]: I1107 17:08:07.801180    9654 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=121c7dc7-8244-410e-a584-a3e68b338d43 path="/var/lib/kubelet/pods/121c7dc7-8244-410e-a584-a3e68b338d43/volumes"
	Nov 07 17:08:17 functional-170143 kubelet[9654]: I1107 17:08:17.558714    9654 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:08:17 functional-170143 kubelet[9654]: I1107 17:08:17.675186    9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4txj\" (UniqueName: \"kubernetes.io/projected/fbeae54f-433a-4ae5-a55e-f8bd1d679533-kube-api-access-f4txj\") pod \"nginx-svc\" (UID: \"fbeae54f-433a-4ae5-a55e-f8bd1d679533\") " pod="default/nginx-svc"
	Nov 07 17:08:19 functional-170143 kubelet[9654]: I1107 17:08:19.182548    9654 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="836e27720f7919fc04451220be5e197c4cc978a57272c0a5805d9fc304f23bac"
	Nov 07 17:08:20 functional-170143 kubelet[9654]: I1107 17:08:20.915784    9654 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:08:21 functional-170143 kubelet[9654]: I1107 17:08:21.081700    9654 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzwc\" (UniqueName: \"kubernetes.io/projected/3fbcc286-7635-4244-8d99-7d79df3dd4c8-kube-api-access-jnzwc\") pod \"hello-node-connect-6458c8fb6f-pstw6\" (UID: \"3fbcc286-7635-4244-8d99-7d79df3dd4c8\") " pod="default/hello-node-connect-6458c8fb6f-pstw6"
	Nov 07 17:08:22 functional-170143 kubelet[9654]: I1107 17:08:22.976386    9654 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="af36bfdb4c07a363460882876e1a61fd9659cb4337503ac32012f09e077a7574"
	Nov 07 17:10:07 functional-170143 kubelet[9654]: W1107 17:10:07.899840    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:15:07 functional-170143 kubelet[9654]: W1107 17:15:07.902057    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:20:07 functional-170143 kubelet[9654]: W1107 17:20:07.905196    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:25:07 functional-170143 kubelet[9654]: W1107 17:25:07.908739    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:30:07 functional-170143 kubelet[9654]: W1107 17:30:07.911745    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:35:07 functional-170143 kubelet[9654]: W1107 17:35:07.915024    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Nov 07 17:40:07 functional-170143 kubelet[9654]: W1107 17:40:07.917616    9654 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [0300249d11be] <==
	* I1107 17:05:19.974352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 17:05:20.075744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 17:05:20.075828       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 17:05:37.508727       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 17:05:37.509275       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342!
	I1107 17:05:37.509220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca31cf32-7adf-45c8-a4ba-52aeffd000e3", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342 became leader
	I1107 17:05:37.610584       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-170143_170128c6-60bf-4e39-86d5-21a9bfc3e342!
	I1107 17:06:20.187126       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1107 17:06:20.187394       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    020c2a1a-59da-42b2-a15b-d15e3c9a4150 388 0 2022-11-07 17:03:19 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-11-07 17:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-285ccea9-bf55-480d-a198-16b12f688a34 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  285ccea9-bf55-480d-a198-16b12f688a34 710 0 2022-11-07 17:06:20 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-11-07 17:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-11-07 17:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1107 17:06:20.188008       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"285ccea9-bf55-480d-a198-16b12f688a34", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1107 17:06:20.188319       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-285ccea9-bf55-480d-a198-16b12f688a34" provisioned
	I1107 17:06:20.188351       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1107 17:06:20.188361       1 volume_store.go:212] Trying to save persistentvolume "pvc-285ccea9-bf55-480d-a198-16b12f688a34"
	I1107 17:06:20.207704       1 volume_store.go:219] persistentvolume "pvc-285ccea9-bf55-480d-a198-16b12f688a34" saved
	I1107 17:06:20.208007       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"285ccea9-bf55-480d-a198-16b12f688a34", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-285ccea9-bf55-480d-a198-16b12f688a34
	
	* 
	* ==> storage-provisioner [2ac37c176ed7] <==
	* I1107 17:04:09.726847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 17:04:09.781152       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 17:04:09.781305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 17:04:27.213490       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 17:04:27.213689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca31cf32-7adf-45c8-a4ba-52aeffd000e3", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64 became leader
	I1107 17:04:27.213805       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64!
	I1107 17:04:27.314709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-170143_aaeac418-20d8-4c19-a3a8-6d2095862b64!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-170143 -n functional-170143
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-170143 -n functional-170143: (1.6587836s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-170143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-170143 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-170143 describe pod : exit status 1 (182.8818ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context functional-170143 describe pod : exit status 1
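The describe step above exits 1 only because the preceding non-running-pod query returned an empty list, so kubectl is invoked with no resource name at all. A minimal triage sketch in Go, assuming kubectl on PATH and reusing the functional-170143 context from the logs; it simply skips the describe when there is nothing to describe:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query the post-mortem runs: names of pods whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", "functional-170143",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			// Guards against `error: resource name may not be empty`.
			fmt.Println("no non-running pods; skipping describe")
			return
		}
		args := append([]string{"--context", "functional-170143", "describe", "pod"}, names...)
		fmt.Println(exec.Command("kubectl", args...).Run())
	}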
--- FAIL: TestFunctional/parallel/ServiceCmd (2172.86s)
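ServiceCmd consumed its entire 2172.86s budget before failing. For local triage it can help to run the same minikube invocations under a hard deadline, so a wedged child process is killed at the timeout instead of being waited on indefinitely. A minimal sketch, reusing the binary and profile names from the post-mortem above; the helper name and the 2-minute cap are illustrative:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// runWithDeadline executes a command and kills it once d elapses.
	// exec.CommandContext issues a hard kill when the context expires,
	// which works even where interrupt-style signalling is unavailable.
	func runWithDeadline(d time.Duration, name string, args ...string) (string, error) {
		ctx, cancel := context.WithTimeout(context.Background(), d)
		defer cancel()
		out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			return string(out), fmt.Errorf("timed out after %s", d)
		}
		return string(out), err
	}

	func main() {
		out, err := runWithDeadline(2*time.Minute,
			"out/minikube-windows-amd64.exe",
			"status", "--format={{.APIServer}}",
			"-p", "functional-170143", "-n", "functional-170143")
		fmt.Println(out, err)
	}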

TestPause/serial/PauseAgain (45.12s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-182142 --alsologtostderr -v=5

=== CONT  TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-182142 --alsologtostderr -v=5: exit status 80 (6.7455618s)

-- stdout --
	* Pausing node pause-182142 ... 
	
	

-- /stdout --
** stderr ** 
	I1107 18:24:41.602126    9724 out.go:296] Setting OutFile to fd 1588 ...
	I1107 18:24:41.686129    9724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:41.686129    9724 out.go:309] Setting ErrFile to fd 1628...
	I1107 18:24:41.686129    9724 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:41.700128    9724 out.go:303] Setting JSON to false
	I1107 18:24:41.700128    9724 mustload.go:65] Loading cluster: pause-182142
	I1107 18:24:41.701135    9724 config.go:180] Loaded profile config "pause-182142": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:41.720117    9724 cli_runner.go:164] Run: docker container inspect pause-182142 --format={{.State.Status}}
	I1107 18:24:41.961902    9724 host.go:66] Checking if "pause-182142" exists ...
	I1107 18:24:41.971906    9724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-182142
	I1107 18:24:42.199917    9724 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.28.0/minikube-v1.28.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.28.0-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube2:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-182142 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1107 18:24:42.216860    9724 out.go:177] * Pausing node pause-182142 ... 
	I1107 18:24:42.235939    9724 host.go:66] Checking if "pause-182142" exists ...
	I1107 18:24:42.253917    9724 ssh_runner.go:195] Run: systemctl --version
	I1107 18:24:42.265906    9724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-182142
	I1107 18:24:42.499883    9724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59970 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-182142\id_rsa Username:docker}
	I1107 18:24:42.747436    9724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:24:42.861378    9724 pause.go:51] kubelet running: true
	I1107 18:24:42.871375    9724 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1107 18:24:43.586332    9724 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I1107 18:24:43.658330    9724 docker.go:461] Pausing containers: [5ee28a460839 6ea919f924e3 704cf9f28fe3 004295df2ac3 4a1bbbda88fa f5c7a1c74361 36d3f2cad4b5 d747119d4a25 7b2ec2b1aa93 e26886dd54b5 f81ca5df5d9b e436e661c240 5f96bae2a07a 173026e317b7]
	I1107 18:24:43.668335    9724 ssh_runner.go:195] Run: docker pause 5ee28a460839 6ea919f924e3 704cf9f28fe3 004295df2ac3 4a1bbbda88fa f5c7a1c74361 36d3f2cad4b5 d747119d4a25 7b2ec2b1aa93 e26886dd54b5 f81ca5df5d9b e436e661c240 5f96bae2a07a 173026e317b7
	I1107 18:24:47.223084    9724 ssh_runner.go:235] Completed: docker pause 5ee28a460839 6ea919f924e3 704cf9f28fe3 004295df2ac3 4a1bbbda88fa f5c7a1c74361 36d3f2cad4b5 d747119d4a25 7b2ec2b1aa93 e26886dd54b5 f81ca5df5d9b e436e661c240 5f96bae2a07a 173026e317b7: (3.5547114s)
	I1107 18:24:47.227362    9724 out.go:177] 
	W1107 18:24:47.230897    9724 out.go:239] X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause 5ee28a460839 6ea919f924e3 704cf9f28fe3 004295df2ac3 4a1bbbda88fa f5c7a1c74361 36d3f2cad4b5 d747119d4a25 7b2ec2b1aa93 e26886dd54b5 f81ca5df5d9b e436e661c240 5f96bae2a07a 173026e317b7: Process exited with status 1
	stdout:
	5ee28a460839
	6ea919f924e3
	704cf9f28fe3
	004295df2ac3
	4a1bbbda88fa
	f5c7a1c74361
	36d3f2cad4b5
	d747119d4a25
	7b2ec2b1aa93
	f81ca5df5d9b
	e436e661c240
	5f96bae2a07a
	173026e317b7
	
	stderr:
	Error response from daemon: Cannot pause container e26886dd54b5ad440ef84a02f910535ca78d4b3867ee8ee5b330072871b2da89: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause 5ee28a460839 6ea919f924e3 704cf9f28fe3 004295df2ac3 4a1bbbda88fa f5c7a1c74361 36d3f2cad4b5 d747119d4a25 7b2ec2b1aa93 e26886dd54b5 f81ca5df5d9b e436e661c240 5f96bae2a07a 173026e317b7: Process exited with status 1
	stdout:
	5ee28a460839
	6ea919f924e3
	704cf9f28fe3
	004295df2ac3
	4a1bbbda88fa
	f5c7a1c74361
	36d3f2cad4b5
	d747119d4a25
	7b2ec2b1aa93
	f81ca5df5d9b
	e436e661c240
	5f96bae2a07a
	173026e317b7
	
	stderr:
	Error response from daemon: Cannot pause container e26886dd54b5ad440ef84a02f910535ca78d4b3867ee8ee5b330072871b2da89: OCI runtime pause failed: unable to freeze: unknown
	
	W1107 18:24:47.230977    9724 out.go:239] * 
	* 
	W1107 18:24:47.985960    9724 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_33.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_33.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 18:24:47.989960    9724 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-182142 --alsologtostderr -v=5" : exit status 80
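The pause fails as a whole because docker pause is handed all fourteen container IDs in a single invocation and one of them (e26886dd54b5) cannot be frozen. A minimal sketch, assuming a local docker CLI; pausing the same IDs one at a time isolates the offending container instead of failing the entire batch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// IDs copied from the failed batch invocation in the stderr above.
		ids := []string{
			"5ee28a460839", "6ea919f924e3", "704cf9f28fe3", "004295df2ac3",
			"4a1bbbda88fa", "f5c7a1c74361", "36d3f2cad4b5", "d747119d4a25",
			"7b2ec2b1aa93", "e26886dd54b5", "f81ca5df5d9b", "e436e661c240",
			"5f96bae2a07a", "173026e317b7",
		}
		for _, id := range ids {
			// A per-container pause surfaces exactly which one fails to freeze.
			if out, err := exec.Command("docker", "pause", id).CombinedOutput(); err != nil {
				fmt.Printf("pause %s failed: %v\n%s", id, err, out)
			}
		}
	}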
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-182142
helpers_test.go:235: (dbg) docker inspect pause-182142:

-- stdout --
	[
	    {
	        "Id": "8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8",
	        "Created": "2022-11-07T18:22:15.2545351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T18:22:16.2443546Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8-json.log",
	        "Name": "/pause-182142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-182142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-182142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/docker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475adf77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae745d03cb146ad24eb8be305a2a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-182142",
	                "Source": "/var/lib/docker/volumes/pause-182142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-182142",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-182142",
	                "name.minikube.sigs.k8s.io": "pause-182142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f452013242388443d0ebd07ca1597ed574e3631fe558440a66a0c2a59fe5008",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9f4520132423",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-182142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8dab1000fa5c",
	                        "pause-182142"
	                    ],
	                    "NetworkID": "1018b4386d98e45631bc5cc5c04e928ff460f9fb80c882423db17bf4c3825a53",
	                    "EndpointID": "f6181ff67f4d8f32d02a2e5adbbc907f1ce3d54d8a03065a30a352b38dda71c3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
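The post-mortem dumps the entire docker inspect document, but the fields that matter for a pause failure are under State. A minimal sketch, assuming a local docker CLI and the pause-182142 container from above; it decodes just Name, State.Status, and State.Paused from the same JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry mirrors only the fields of `docker inspect` output used here.
	type inspectEntry struct {
		Name  string
		State struct {
			Status string
			Paused bool
		}
	}

	func main() {
		raw, err := exec.Command("docker", "inspect", "pause-182142").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var entries []inspectEntry // docker inspect emits a JSON array
		if err := json.Unmarshal(raw, &entries); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, e := range entries {
			fmt.Printf("%s: status=%s paused=%v\n", e.Name, e.State.Status, e.State.Paused)
		}
	}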
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-182142 -n pause-182142
E1107 18:24:49.398662    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-182142 -n pause-182142: exit status 2 (1.6397526s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-182142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-182142 logs -n 25: (13.3133001s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p pause-182142 --memory=2048  | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:21 GMT | 07 Nov 22 18:23 GMT |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:21 GMT | 07 Nov 22 18:22 GMT |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-181846      | running-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:23 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| ssh     | -p NoKubernetes-181846 sudo    | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT |                     |
	|         | systemctl is-active --quiet    |                           |                   |         |                     |                     |
	|         | service kubelet                |                           |                   |         |                     |                     |
	| profile | list                           | minikube                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| profile | list --output=json             | minikube                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| stop    | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| start   | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| ssh     | -p NoKubernetes-181846 sudo    | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT |                     |
	|         | systemctl is-active --quiet    |                           |                   |         |                     |                     |
	|         | service kubelet                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| start   | -p stopped-upgrade-181846      | stopped-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:23 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| start   | -p force-systemd-flag-182254   | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:24 GMT |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-181846      | running-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| delete  | -p flannel-182327              | flannel-182327            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| start   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p custom-flannel-182329       | custom-flannel-182329     | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| start   | -p force-systemd-env-182331    | force-systemd-env-182331  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-181846      | stopped-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:24 GMT |
	| start   | -p cert-expiration-182403      | cert-expiration-182403    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| pause   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-182254      | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-182254   | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	| unpause | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| pause   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| start   | -p docker-flags-182447         | docker-flags-182447       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 18:24:48
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 18:24:48.259813   10120 out.go:296] Setting OutFile to fd 1768 ...
	I1107 18:24:48.333835   10120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:48.333835   10120 out.go:309] Setting ErrFile to fd 1572...
	I1107 18:24:48.333835   10120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:48.363334   10120 out.go:303] Setting JSON to false
	I1107 18:24:48.366298   10120 start.go:116] hostinfo: {"hostname":"minikube2","uptime":10125,"bootTime":1667835363,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:24:48.366298   10120 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:24:48.371296   10120 out.go:177] * [docker-flags-182447] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:24:48.377280   10120 notify.go:220] Checking for updates...
	I1107 18:24:48.379293   10120 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:24:48.381294   10120 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:24:48.383285   10120 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:24:48.386290   10120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:24:44.507482    6492 main.go:134] libmachine: Using SSH client type: native
	I1107 18:24:44.508126    6492 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 60289 <nil> <nil>}
	I1107 18:24:44.508126    6492 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 18:24:47.696852    6492 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 18:24:44.225723000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 18:24:47.696852    6492 machine.go:91] provisioned docker machine in 6.4344555s
	I1107 18:24:47.696852    6492 client.go:171] LocalClient.Create took 41.4325832s
	I1107 18:24:47.696852    6492 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-182403" took 41.4325832s
	I1107 18:24:47.696852    6492 start.go:300] post-start starting for "cert-expiration-182403" (driver="docker")
	I1107 18:24:47.696852    6492 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 18:24:47.709849    6492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 18:24:47.716856    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:47.929701    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:48.084959    6492 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 18:24:48.094973    6492 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 18:24:48.094973    6492 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1107 18:24:48.094973    6492 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1107 18:24:48.095958    6492 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem -> 99482.pem in /etc/ssl/certs
	I1107 18:24:48.107948    6492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 18:24:48.128955    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /etc/ssl/certs/99482.pem (1708 bytes)
	I1107 18:24:48.180957    6492 start.go:303] post-start completed in 484.1004ms
	I1107 18:24:48.190959    6492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182403
	I1107 18:24:48.408301    6492 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\config.json ...
	I1107 18:24:48.432297    6492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:24:48.449313    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:48.661377    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:48.879372    6492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 18:24:48.890385    6492 start.go:128] duration metric: createHost completed in 42.6328843s
	I1107 18:24:48.890385    6492 start.go:83] releasing machines lock for "cert-expiration-182403", held for 42.6328843s
	I1107 18:24:48.898372    6492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182403
	I1107 18:24:49.163902    6492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 18:24:49.172898    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:49.174887    6492 ssh_runner.go:195] Run: systemctl --version
	I1107 18:24:49.182900    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:48.389295   10120 config.go:180] Loaded profile config "cert-expiration-182403": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 config.go:180] Loaded profile config "force-systemd-env-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 config.go:180] Loaded profile config "pause-182142": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:24:48.740374   10120 docker.go:137] docker version: linux-20.10.20
	I1107 18:24:48.753371   10120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:24:49.399683   10120 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:24:48.8957443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:24:49.403686   10120 out.go:177] * Using the docker driver based on user configuration
	I1107 18:24:49.405682   10120 start.go:282] selected driver: docker
	I1107 18:24:49.405682   10120 start.go:808] validating driver "docker" against <nil>
	I1107 18:24:49.405682   10120 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:24:49.478686   10120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:24:50.111570   10120 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:24:49.6290577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:24:50.111570   10120 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:24:50.112569   10120 start_flags.go:896] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1107 18:24:50.115557   10120 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:24:50.117603   10120 cni.go:95] Creating CNI manager for ""
	I1107 18:24:50.117603   10120 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 18:24:50.117603   10120 start_flags.go:317] config:
	{Name:docker-flags-182447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:docker-flags-182447 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:24:50.120564   10120 out.go:177] * Starting control plane node docker-flags-182447 in cluster docker-flags-182447
	I1107 18:24:50.122560   10120 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:24:50.126571   10120 out.go:177] * Pulling base image ...
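	Note: the driver validation above shells out to docker system info --format "{{json .}}" and decodes the result into the info.go structure dumped here. To spot-check the same fields by hand, a sketch using stock docker info template keys (the field names are standard Docker ones, not minikube-specific):
	  docker info --format '{{.OperatingSystem}} / {{.ServerVersion}} / {{.CgroupDriver}}'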
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 18:22:17 UTC, end at Mon 2022-11-07 18:24:51 UTC. --
	Nov 07 18:23:43 pause-182142 dockerd[4189]: time="2022-11-07T18:23:43.325352400Z" level=info msg="Loading containers: start."
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.121347400Z" level=info msg="ignoring event" container=851a800aae3575a79623bc91882895a7d9fe1f06aef9b21f78a16f4e3bf9169d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.121653700Z" level=info msg="ignoring event" container=bf7a05217ed04b82f3fd1267805f11862baab37cc54fd304cd7d1b70447080cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.220953300Z" level=info msg="ignoring event" container=0ff9f33858c4c6c027328bfc9b028a790542ded89144be30c71ae89f8a4eb3ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.222991600Z" level=info msg="ignoring event" container=67c6693b310721e9765185a485c166be849334dc3839615decdc22ec602f72eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.223036300Z" level=info msg="ignoring event" container=a091e583621f94f538316eb23a541e918ffcaa794e99cf352841ddfc91cde8ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.223069700Z" level=info msg="ignoring event" container=d38f5973e26926fd8594dfdd0060ab415b61f7214b498830c2b5a3be95eb940d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:53 pause-182142 dockerd[4189]: time="2022-11-07T18:23:53.821475700Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=7bf4bc1418d534e8160cbb45b2a03d9aeaad8875733b924e8763f292ae815ecd
	Nov 07 18:24:00 pause-182142 dockerd[4189]: time="2022-11-07T18:24:00.266583900Z" level=info msg="ignoring event" container=7bf4bc1418d534e8160cbb45b2a03d9aeaad8875733b924e8763f292ae815ecd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.150898200Z" level=info msg="Removing stale sandbox 4f5e2db58e26505155192cac70dc335a725d6a563270e429afa405d8fa58197d (0ff9f33858c4c6c027328bfc9b028a790542ded89144be30c71ae89f8a4eb3ad)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.158012700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f 9603d7a4c936ae87cd4ae4a91cf7499c655d9672190cd054812c80b0e6b38fcc], retrying...."
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.410625300Z" level=info msg="Removing stale sandbox 9aeb9f0fe746e66dbaefcb26b7f670857bd77aa4c742fe73312e2fe118d9b340 (d38f5973e26926fd8594dfdd0060ab415b61f7214b498830c2b5a3be95eb940d)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.424506800Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f a125cd2344a61e8a9d43a3767278785f6e90d8fbbc8a0808c6135f2287d76f5c], retrying...."
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.693052800Z" level=info msg="Removing stale sandbox da3270f618d795d26620543bc14843f67a9f4d1a7be07872beb956dbe553462f (a091e583621f94f538316eb23a541e918ffcaa794e99cf352841ddfc91cde8ff)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.972661400Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f c789841f4bf205117a7b766e19e75294711c7c21a32c5e9c5cce441924754c43], retrying...."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.137863500Z" level=info msg="Removing stale sandbox ecb8f973e730f79435b0a9470976153775800cbc9aad1d31e1051b74ec7a7724 (67c6693b310721e9765185a485c166be849334dc3839615decdc22ec602f72eb)"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.151625100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f 3b0a1e9a297a039a99f7f61713a54ea08710bf93a22a9225a60fe3fbf9fc3db5], retrying...."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.293828000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.431484800Z" level=info msg="Loading containers: done."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.522088200Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.522308900Z" level=info msg="Daemon has completed initialization"
	Nov 07 18:24:07 pause-182142 systemd[1]: Started Docker Application Container Engine.
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.587136000Z" level=info msg="API listen on [::]:2376"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.595830900Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 18:24:44 pause-182142 dockerd[4189]: time="2022-11-07T18:24:44.255992300Z" level=error msg="Handler for POST /v1.41/containers/e26886dd54b5/pause returned error: Cannot pause container e26886dd54b5ad440ef84a02f910535ca78d4b3867ee8ee5b330072871b2da89: OCI runtime pause failed: unable to freeze: unknown"
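	The failed POST /containers/e26886dd54b5/pause above is the pause failure recorded in this run: docker pause depends on the cgroup freezer, which can be absent or flaky on WSL2 kernels. A rough manual reproduction (the profile name pause-182142 is inferred from the node name in these logs; the freezer path assumes a cgroup v1 layout):
	  minikube -p pause-182142 ssh -- docker pause e26886dd54b5
	  minikube -p pause-182142 ssh -- ls /sys/fs/cgroup/freezer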
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5ee28a4608398       6e38f40d628db       19 seconds ago       Running             storage-provisioner       0                   6ea919f924e35
	704cf9f28fe3a       beaaf00edd38a       37 seconds ago       Running             kube-proxy                2                   4a1bbbda88fad
	004295df2ac3e       0346dbd74bcb9       38 seconds ago       Running             kube-apiserver            2                   f5c7a1c743618
	36d3f2cad4b50       5185b96f0becf       41 seconds ago       Running             coredns                   1                   5f96bae2a07ac
	d747119d4a252       6039992312758       41 seconds ago       Running             kube-controller-manager   2                   f81ca5df5d9bc
	7b2ec2b1aa93b       6d23ec0e8b87e       41 seconds ago       Running             kube-scheduler            1                   e436e661c2403
	e26886dd54b5a       a8a176a5d5d69       41 seconds ago       Running             etcd                      1                   173026e317b76
	851a800aae357       6039992312758       About a minute ago   Exited              kube-controller-manager   1                   67c6693b31072
	7bf4bc1418d53       0346dbd74bcb9       About a minute ago   Exited              kube-apiserver            1                   0ff9f33858c4c
	bf7a05217ed04       beaaf00edd38a       About a minute ago   Exited              kube-proxy                1                   a091e583621f9
	3866e1d50d618       5185b96f0becf       About a minute ago   Exited              coredns                   0                   c64fe72b7d58d
	733b0d85b57b0       6d23ec0e8b87e       2 minutes ago        Exited              kube-scheduler            0                   f2bac5082004e
	07e066724ef96       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   0cd768ca00256
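	This table has the shape of crictl ps -a output gathered on the node. Assuming the profile name matches the node name pause-182142, it should be reproducible with:
	  minikube -p pause-182142 ssh -- sudo crictl ps -a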
	
	* 
	* ==> coredns [36d3f2cad4b5] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
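	These messages show CoreDNS holding off on serving while the apiserver restarts, then starting with an unsynced cache. Pod state and the same logs can be pulled from outside the node (the kubectl context name is assumed to match the profile, as elsewhere in this report):
	  kubectl --context pause-182142 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context pause-182142 -n kube-system logs -l k8s-app=kube-dns --tail=20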
	
	* 
	* ==> coredns [3866e1d50d61] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Nov 7 17:55] WSL2: Performing memory compaction.
	[Nov 7 17:56] WSL2: Performing memory compaction.
	[Nov 7 17:57] WSL2: Performing memory compaction.
	[Nov 7 17:59] WSL2: Performing memory compaction.
	[Nov 7 18:00] WSL2: Performing memory compaction.
	[Nov 7 18:01] WSL2: Performing memory compaction.
	[Nov 7 18:03] WSL2: Performing memory compaction.
	[Nov 7 18:04] WSL2: Performing memory compaction.
	[Nov 7 18:05] WSL2: Performing memory compaction.
	[Nov 7 18:06] WSL2: Performing memory compaction.
	[Nov 7 18:07] WSL2: Performing memory compaction.
	[Nov 7 18:08] WSL2: Performing memory compaction.
	[Nov 7 18:10] WSL2: Performing memory compaction.
	[Nov 7 18:11] WSL2: Performing memory compaction.
	[Nov 7 18:12] WSL2: Performing memory compaction.
	[Nov 7 18:13] WSL2: Performing memory compaction.
	[Nov 7 18:15] WSL2: Performing memory compaction.
	[Nov 7 18:16] WSL2: Performing memory compaction.
	[Nov 7 18:17] WSL2: Performing memory compaction.
	[Nov 7 18:18] WSL2: Performing memory compaction.
	[Nov 7 18:19] WSL2: Performing memory compaction.
	[Nov 7 18:20] process 'docker/tmp/qemu-check426843351/check' started with executable stack
	[Nov 7 18:21] WSL2: Performing memory compaction.
	[Nov 7 18:23] WSL2: Performing memory compaction.
	[Nov 7 18:24] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [07e066724ef9] <==
	* WARNING: 2022/11/07 18:23:26 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:23:27.340Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.3886524s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:23:27.340Z","caller":"traceutil/trace.go:171","msg":"trace[884282194] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:398; }","duration":"3.3889682s","start":"2022-11-07T18:23:23.951Z","end":"2022-11-07T18:23:27.340Z","steps":["trace[884282194] 'range keys from in-memory index tree'  (duration: 3.3886208s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.0554101s","expected-duration":"100ms","prefix":"","request":"header:<ID:2289944428977903656 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" value_size:663 lease:2289944428977903179 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-11-07T18:23:27.341Z","caller":"traceutil/trace.go:171","msg":"trace[258053472] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:411; }","duration":"7.0049861s","start":"2022-11-07T18:23:20.336Z","end":"2022-11-07T18:23:27.341Z","steps":["trace[258053472] 'read index received'  (duration: 3.9486418s)","trace[258053472] 'applied index is now lower than readState.Index'  (duration: 3.0563403s)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:23:27.341Z","caller":"traceutil/trace.go:171","msg":"trace[1600712388] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"6.2874973s","start":"2022-11-07T18:23:21.054Z","end":"2022-11-07T18:23:27.341Z","steps":["trace[1600712388] 'process raft request'  (duration: 3.2314658s)","trace[1600712388] 'compare'  (duration: 3.054117s)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:21.054Z","time spent":"6.287582s","remote":"127.0.0.1:37120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":751,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" value_size:663 lease:2289944428977903179 >> failure:<>"}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.0053359s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-182142\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-11-07T18:23:27.342Z","caller":"traceutil/trace.go:171","msg":"trace[1338312172] range","detail":"{range_begin:/registry/minions/pause-182142; range_end:; response_count:1; response_revision:399; }","duration":"7.0054294s","start":"2022-11-07T18:23:20.336Z","end":"2022-11-07T18:23:27.342Z","steps":["trace[1338312172] 'agreement among raft nodes before linearized reading'  (duration: 7.0052822s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:20.336Z","time spent":"7.0055702s","remote":"127.0.0.1:37172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":4572,"request content":"key:\"/registry/minions/pause-182142\" "}
	{"level":"warn","ts":"2022-11-07T18:23:27.351Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"3.0409771s","expected-duration":"1s"}
	{"level":"warn","ts":"2022-11-07T18:23:27.352Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"973.117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"warn","ts":"2022-11-07T18:23:27.352Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"793.5912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:23:27.353Z","caller":"traceutil/trace.go:171","msg":"trace[782749255] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:399; }","duration":"973.3128ms","start":"2022-11-07T18:23:26.379Z","end":"2022-11-07T18:23:27.353Z","steps":["trace[782749255] 'agreement among raft nodes before linearized reading'  (duration: 972.9107ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:23:27.353Z","caller":"traceutil/trace.go:171","msg":"trace[1461185123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"793.9802ms","start":"2022-11-07T18:23:26.559Z","end":"2022-11-07T18:23:27.353Z","steps":["trace[1461185123] 'agreement among raft nodes before linearized reading'  (duration: 793.5552ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.353Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:26.379Z","time spent":"973.4656ms","remote":"127.0.0.1:37164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":365,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-11-07T18:23:27.353Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:26.559Z","time spent":"794.0548ms","remote":"127.0.0.1:37200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-11-07T18:23:37.438Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T18:23:37.438Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-182142","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/11/07 18:23:37 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/07 18:23:37 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T18:23:37.618Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-11-07T18:23:37.630Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T18:23:37.631Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T18:23:37.631Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-182142","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [e26886dd54b5] <==
	* {"level":"warn","ts":"2022-11-07T18:24:34.243Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.0203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-565d847f94-7hl75.17256092a95555a4\" ","response":"range_response_count:1 size:687"}
	{"level":"info","ts":"2022-11-07T18:24:34.243Z","caller":"traceutil/trace.go:171","msg":"trace[1565716881] range","detail":"{range_begin:/registry/events/kube-system/coredns-565d847f94-7hl75.17256092a95555a4; range_end:; response_count:1; response_revision:488; }","duration":"121.262ms","start":"2022-11-07T18:24:34.122Z","end":"2022-11-07T18:24:34.243Z","steps":["trace[1565716881] 'agreement among raft nodes before linearized reading'  (duration: 26.0553ms)","trace[1565716881] 'range keys from in-memory index tree'  (duration: 94.9425ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:24:35.138Z","caller":"traceutil/trace.go:171","msg":"trace[1241044910] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:520; }","duration":"203.6436ms","start":"2022-11-07T18:24:34.934Z","end":"2022-11-07T18:24:35.138Z","steps":["trace[1241044910] 'read index received'  (duration: 203.6323ms)","trace[1241044910] 'applied index is now lower than readState.Index'  (duration: 8.4µs)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:35.150Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"215.3801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2022-11-07T18:24:35.150Z","caller":"traceutil/trace.go:171","msg":"trace[1403645031] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:494; }","duration":"215.5537ms","start":"2022-11-07T18:24:34.934Z","end":"2022-11-07T18:24:35.150Z","steps":["trace[1403645031] 'agreement among raft nodes before linearized reading'  (duration: 203.8569ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:24:42.356Z","caller":"traceutil/trace.go:171","msg":"trace[767784216] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"224.45ms","start":"2022-11-07T18:24:42.131Z","end":"2022-11-07T18:24:42.356Z","steps":["trace[767784216] 'process raft request'  (duration: 154.6735ms)","trace[767784216] 'compare'  (duration: 69.5201ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:43.245Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:43.249Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"527.5134ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289944429000143197 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" value_size:514 lease:2289944429000142961 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-11-07T18:24:43.249Z","caller":"traceutil/trace.go:171","msg":"trace[1999920095] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"529.0982ms","start":"2022-11-07T18:24:42.720Z","end":"2022-11-07T18:24:43.249Z","steps":["trace[1999920095] 'compare'  (duration: 527.1569ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:24:43.249Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.720Z","time spent":"529.2561ms","remote":"127.0.0.1:39926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":586,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" value_size:514 lease:2289944429000142961 >> failure:<>"}
	WARNING: 2022/11/07 18:24:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:24:43.506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.841Z","time spent":"664.9013ms","remote":"127.0.0.1:39946","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	WARNING: 2022/11/07 18:24:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:24:43.746Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:44.247Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:44.749Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:45.250Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:45.751Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:46.252Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:46.754Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:47.176Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"4.4555571s","expected-duration":"1s"}
	{"level":"info","ts":"2022-11-07T18:24:47.212Z","caller":"traceutil/trace.go:171","msg":"trace[741265500] linearizableReadLoop","detail":"{readStateIndex:543; appliedIndex:541; }","duration":"4.4679427s","start":"2022-11-07T18:24:42.744Z","end":"2022-11-07T18:24:47.212Z","steps":["trace[741265500] 'read index received'  (duration: 4.432951s)","trace[741265500] 'applied index is now lower than readState.Index'  (duration: 34.9871ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:47.212Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.468278s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1112"}
	{"level":"info","ts":"2022-11-07T18:24:47.212Z","caller":"traceutil/trace.go:171","msg":"trace[1159948020] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:515; }","duration":"4.468355s","start":"2022-11-07T18:24:42.744Z","end":"2022-11-07T18:24:47.212Z","steps":["trace[1159948020] 'agreement among raft nodes before linearized reading'  (duration: 4.4682304s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:24:47.212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.744Z","time spent":"4.4684298s","remote":"127.0.0.1:39944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
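	The slow fdatasync warning (4.45s against the 1s expectation) and the repeated ReadIndex retries point at IO stalls inside the WSL2 VM rather than an etcd bug. One way to probe the member is etcdctl inside the etcd container; the certificate paths below are assumptions based on minikube's kubeadm-style layout under /var/lib/minikube/certs and may differ:
	  minikube -p pause-182142 ssh -- sudo docker exec e26886dd54b5 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table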
	
	* 
	* ==> kernel <==
	*  18:25:02 up  1:40,  0 users,  load average: 8.07, 7.47, 4.43
	Linux pause-182142 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [004295df2ac3] <==
	* Trace[6906245]: ---"About to write a response" 608ms (18:24:31.972)
	Trace[6906245]: [609.538ms] [609.538ms] END
	I1107 18:24:43.250660       1 trace.go:205] Trace[259362888]: "Create etcd3" audit-id:83ab0b48-cc26-4416-8e9d-0e6f961e15fe,key:/events/default/pause-182142.172560a13e61a5e0,type:*core.Event (07-Nov-2022 18:24:42.667) (total time: 583ms):
	Trace[259362888]: ---"Txn call finished" err:<nil> 582ms (18:24:43.250)
	Trace[259362888]: [583.0138ms] [583.0138ms] END
	I1107 18:24:43.251339       1 trace.go:205] Trace[1047769700]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:83ab0b48-cc26-4416-8e9d-0e6f961e15fe,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:24:42.665) (total time: 585ms):
	Trace[1047769700]: ---"Write to database call finished" len:254,err:<nil> 584ms (18:24:43.250)
	Trace[1047769700]: [585.6217ms] [585.6217ms] END
	{"level":"warn","ts":"2022-11-07T18:24:43.505Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ac9500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1107 18:24:43.505859       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	E1107 18:24:43.506024       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1107 18:24:43.506031       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-11-07T18:24:43.505Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0019f5dc0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1107 18:24:43.506083       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 16.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	I1107 18:24:43.506124       1 trace.go:205] Trace[249348945]: "GuaranteedUpdate etcd3" audit-id:2c0f0cd8-d841-4060-9f3d-7a9fc782a4fb,key:/minions/pause-182142,type:*core.Node (07-Nov-2022 18:24:42.834) (total time: 671ms):
	Trace[249348945]: ---"Txn call finished" err:context canceled 665ms (18:24:43.506)
	Trace[249348945]: [671.305ms] [671.305ms] END
	E1107 18:24:43.506157       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 274.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E1107 18:24:43.507608       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I1107 18:24:43.508988       1 trace.go:205] Trace[937414451]: "Patch" url:/api/v1/nodes/pause-182142/status,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:2c0f0cd8-d841-4060-9f3d-7a9fc782a4fb,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:24:42.834) (total time: 674ms):
	Trace[937414451]: [674.5295ms] [674.5295ms] END
	E1107 18:24:43.509352       1 timeout.go:141] post-timeout activity - time-elapsed: 3.6133ms, PATCH "/api/v1/nodes/pause-182142/status" result: <nil>
	E1107 18:24:43.510308       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1107 18:24:43.515827       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E1107 18:24:43.517399       1 timeout.go:141] post-timeout activity - time-elapsed: 11.6074ms, POST "/api/v1/namespaces/default/events" result: <nil>
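	The Handler timeout errors line up with the etcd stalls above: the apiserver abandoned requests that etcd eventually completed. Once the cluster settles, a quick health probe against the standard readiness endpoint (assuming the context name matches the profile):
	  kubectl --context pause-182142 get --raw='/readyz?verbose'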
	
	* 
	* ==> kube-apiserver [7bf4bc1418d5] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:52.320591       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:52.428151       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:53.057919       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [851a800aae35] <==
	* 
	* 
	* ==> kube-controller-manager [d747119d4a25] <==
	* I1107 18:24:35.518319       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:24:35.518337       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1107 18:24:35.518676       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1107 18:24:35.518681       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:24:35.518829       1 shared_informer.go:262] Caches are synced for deployment
	I1107 18:24:35.518995       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 18:24:35.519231       1 shared_informer.go:262] Caches are synced for node
	I1107 18:24:35.521033       1 range_allocator.go:166] Starting range CIDR allocator
	I1107 18:24:35.521085       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1107 18:24:35.521118       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1107 18:24:35.519558       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 18:24:35.528357       1 shared_informer.go:262] Caches are synced for taint
	I1107 18:24:35.528581       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 18:24:35.528616       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 18:24:35.528666       1 taint_manager.go:209] "Sending events to api server"
	W1107 18:24:35.528843       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-182142. Assuming now as a timestamp.
	I1107 18:24:35.529078       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 18:24:35.529171       1 event.go:294] "Event occurred" object="pause-182142" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-182142 event: Registered Node pause-182142 in Controller"
	I1107 18:24:35.530995       1 shared_informer.go:262] Caches are synced for persistent volume
	I1107 18:24:35.540605       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:24:35.626996       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:24:35.638396       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:24:35.946454       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:24:35.958971       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:24:35.959090       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [704cf9f28fe3] <==
	* I1107 18:24:15.728232       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:24:15.732303       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:24:15.735418       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:24:15.817738       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1107 18:24:15.826963       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	I1107 18:24:22.324348       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1107 18:24:22.324526       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1107 18:24:22.324575       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:24:22.718077       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:24:22.718373       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:24:22.718400       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:24:22.718426       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:24:22.718460       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:24:22.718881       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:24:22.719449       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:24:22.719553       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:24:22.720824       1 config.go:444] "Starting node config controller"
	I1107 18:24:22.721031       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:24:22.721041       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:24:22.720891       1 config.go:317] "Starting service config controller"
	I1107 18:24:22.721369       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:24:22.721378       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:24:22.830764       1 shared_informer.go:262] Caches are synced for service config
	I1107 18:24:22.830880       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:24:22.830846       1 shared_informer.go:262] Caches are synced for node config
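	The modprobe messages are expected here: the WSL2 kernel compiles ip_vs/nf_conntrack in, and /lib/modules is not mounted into the container, so kube-proxy's fallback is harmless. To confirm on the node (the first grep typically comes back empty because the code is built in rather than loaded as a module; the /proc/config.gz check assumes the kernel was built with IKCONFIG, as WSL2 kernels usually are):
	  minikube -p pause-182142 ssh -- grep ip_vs /proc/modules
	  minikube -p pause-182142 ssh -- zgrep CONFIG_IP_VS /proc/config.gz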
	
	* 
	* ==> kube-proxy [bf7a05217ed0] <==
	* E1107 18:23:39.853343       1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I1107 18:23:39.856925       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:23:39.859506       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:23:39.862717       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:23:39.865636       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:23:39.868277       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1107 18:23:39.875796       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 18:23:41.048436       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [733b0d85b57b] <==
	* E1107 18:22:53.917868       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:22:53.918128       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 18:22:53.918266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 18:22:54.008910       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 18:22:54.009042       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 18:22:54.036844       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 18:22:54.036999       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 18:22:54.117852       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 18:22:54.118007       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 18:22:54.285237       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 18:22:54.285416       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 18:22:54.316684       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 18:22:54.316831       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 18:22:54.323838       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 18:22:54.325001       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 18:22:54.337816       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:22:54.337973       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:22:56.019019       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 18:22:56.019343       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 18:23:01.523293       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:23:37.418953       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 18:23:37.419190       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1107 18:23:37.419387       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I1107 18:23:37.419580       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1107 18:23:37.420157       1 run.go:74] "command failed" err="finished without leader elect"
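	The forbidden list/watch errors at startup are a normal race: the scheduler comes up before the apiserver has finished wiring RBAC, and they stop once caches sync (18:23:01 above). Permissions can be sanity-checked after the fact via impersonation:
	  kubectl --context pause-182142 auth can-i list pods --as=system:kube-scheduler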
	
	* 
	* ==> kube-scheduler [7b2ec2b1aa93] <==
	* I1107 18:24:15.118739       1 serving.go:348] Generated self-signed cert in-memory
	W1107 18:24:22.217614       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 18:24:22.217765       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 18:24:22.217783       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 18:24:22.217794       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 18:24:22.419964       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 18:24:22.420464       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:24:22.424807       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 18:24:22.425732       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 18:24:22.425858       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:24:22.425914       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 18:24:22.531171       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 18:22:17 UTC, end at Mon 2022-11-07 18:25:03 UTC. --
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.842502    6389 kubelet_node_status.go:73] "Successfully registered node" node="pause-182142"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.928653    6389 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.929093    6389 status_manager.go:161] "Starting to sync pod status with apiserver"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.929331    6389 kubelet.go:2010] "Starting kubelet main sync loop"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: E1107 18:24:41.929475    6389 kubelet.go:2034] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.944017    6389 setters.go:545] "Node became not ready" node="pause-182142" condition={Type:Ready Status:False LastHeartbeatTime:2022-11-07 18:24:41.9439341 +0000 UTC m=+0.766258701 LastTransitionTime:2022-11-07 18:24:41.9439341 +0000 UTC m=+0.766258701 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.030671    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.231739    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.448312    6389 apiserver.go:52] "Watching apiserver"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.634482    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645479    6389 cpu_manager.go:213] "Starting CPU manager" policy="none"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645673    6389 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645835    6389 state_mem.go:36] "Initialized new in-memory state store"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646317    6389 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646513    6389 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646538    6389 policy_none.go:49] "None policy: Start"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659011    6389 memory_manager.go:168] "Starting memorymanager" policy="None"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659132    6389 state_mem.go:35] "Initializing new in-memory state store"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659343    6389 state_mem.go:75] "Updated machine memory state"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.662268    6389 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.663151    6389 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Nov 07 18:24:43 pause-182142 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 07 18:24:43 pause-182142 kubelet[6389]: I1107 18:24:43.397064    6389 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 07 18:24:43 pause-182142 systemd[1]: kubelet.service: Succeeded.
	Nov 07 18:24:43 pause-182142 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [5ee28a460839] <==
	* I1107 18:24:34.049779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 18:24:34.079225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 18:24:34.079463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 18:24:34.091698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 18:24:34.092098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6!
	I1107 18:24:34.121150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1dfff7be-9253-472b-811a-6e2ac2fa00b0", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6 became leader
	I1107 18:24:34.292346       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6!
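
The storage-provisioner takes a cluster-wide lock before serving: the leaderelection messages above correspond to a leader annotation written onto the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event. Inspecting the lock by hand would look like this (a sketch; the annotation key is the client-go resourcelock convention, not shown in this log):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# look for the control-plane.alpha.kubernetes.io/leader annotation holding the current holder's identity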
	
	

-- /stdout --
** stderr ** 
	E1107 18:25:02.118966    6468 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
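
The describe-nodes failure above is expected while the profile is paused: pausing freezes the control-plane containers, so kubectl's TLS handshake to the apiserver never completes. A minimal way to confirm and recover (a sketch using the profile name from this run, not harness output):

	kubectl --context pause-182142 get nodes --request-timeout=5s    # expected to time out while paused
	out/minikube-windows-amd64.exe unpause -p pause-182142           # resume the control plane, then retry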
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-182142 -n pause-182142

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-182142 -n pause-182142: exit status 2 (1.7595157s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-182142" apiserver is not running, skipping kubectl commands (state="Paused")
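
Both status probes in this post-mortem use minikube's Go-template output to read a single field per invocation instead of parsing the whole status table: {{.APIServer}} here, and {{.Host}} in the follow-up probe below. Reproducing them by hand uses the same commands the harness runs:

	out/minikube-windows-amd64.exe status -p pause-182142 --format={{.APIServer}}   # prints: Paused
	out/minikube-windows-amd64.exe status -p pause-182142 --format={{.Host}}        # prints: Running

A non-zero exit from status only signals that some component is not in the Running state, which is why the harness annotates exit status 2 with "may be ok".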
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-182142
helpers_test.go:235: (dbg) docker inspect pause-182142:

-- stdout --
	[
	    {
	        "Id": "8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8",
	        "Created": "2022-11-07T18:22:15.2545351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T18:22:16.2443546Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8/8dab1000fa5c7d6a17e5f3e342b0b90b1127bfbb4be7f9dbecb432a88109f3b8-json.log",
	        "Name": "/pause-182142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-182142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-182142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/docker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475adf77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae745d03cb146ad24eb8be305a2a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07a345318becdb73f6c81537a3396ae5d4a9c879beb4ceebf04d5237fefc312d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-182142",
	                "Source": "/var/lib/docker/volumes/pause-182142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-182142",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-182142",
	                "name.minikube.sigs.k8s.io": "pause-182142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f452013242388443d0ebd07ca1597ed574e3631fe558440a66a0c2a59fe5008",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9f4520132423",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-182142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8dab1000fa5c",
	                        "pause-182142"
	                    ],
	                    "NetworkID": "1018b4386d98e45631bc5cc5c04e928ff460f9fb80c882423db17bf4c3825a53",
	                    "EndpointID": "f6181ff67f4d8f32d02a2e5adbbc907f1ce3d54d8a03065a30a352b38dda71c3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
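
Most of the inspect dump above is incidental; the state that matters for this failure can be pulled directly with an inspect format template, the same mechanism minikube's cli_runner uses later in this log (a sketch, not harness output):

	docker inspect -f "{{.State.Status}} paused={{.State.Paused}}" pause-182142
	# running paused=false

This makes the asymmetry visible: Docker reports the outer kic container as running and not paused, while minikube reports the apiserver as Paused, because minikube pause freezes the Kubernetes containers inside the node container rather than the node container itself.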
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-182142 -n pause-182142

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-182142 -n pause-182142: exit status 2 (1.6441797s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-182142 logs -n 25
E1107 18:25:13.852127    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-182142 logs -n 25: (17.3655965s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-181846      | running-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:23 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| ssh     | -p NoKubernetes-181846 sudo    | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT |                     |
	|         | systemctl is-active --quiet    |                           |                   |         |                     |                     |
	|         | service kubelet                |                           |                   |         |                     |                     |
	| profile | list                           | minikube                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| profile | list --output=json             | minikube                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| stop    | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| start   | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| ssh     | -p NoKubernetes-181846 sudo    | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT |                     |
	|         | systemctl is-active --quiet    |                           |                   |         |                     |                     |
	|         | service kubelet                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-181846         | NoKubernetes-181846       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:22 GMT |
	| start   | -p stopped-upgrade-181846      | stopped-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:23 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| start   | -p force-systemd-flag-182254   | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:22 GMT | 07 Nov 22 18:24 GMT |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-181846      | running-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| delete  | -p flannel-182327              | flannel-182327            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| start   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p custom-flannel-182329       | custom-flannel-182329     | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:23 GMT |
	| start   | -p force-systemd-env-182331    | force-systemd-env-182331  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:25 GMT |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-181846      | stopped-upgrade-181846    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:23 GMT | 07 Nov 22 18:24 GMT |
	| start   | -p cert-expiration-182403      | cert-expiration-182403    | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| pause   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-182254      | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-182254   | force-systemd-flag-182254 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	| unpause | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT | 07 Nov 22 18:24 GMT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| pause   | -p pause-182142                | pause-182142              | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| start   | -p docker-flags-182447         | docker-flags-182447       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:24 GMT |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=docker                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-182331       | force-systemd-env-182331  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:25 GMT | 07 Nov 22 18:25 GMT |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-182331    | force-systemd-env-182331  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:25 GMT |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 18:24:48
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 18:24:48.259813   10120 out.go:296] Setting OutFile to fd 1768 ...
	I1107 18:24:48.333835   10120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:48.333835   10120 out.go:309] Setting ErrFile to fd 1572...
	I1107 18:24:48.333835   10120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:24:48.363334   10120 out.go:303] Setting JSON to false
	I1107 18:24:48.366298   10120 start.go:116] hostinfo: {"hostname":"minikube2","uptime":10125,"bootTime":1667835363,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:24:48.366298   10120 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:24:48.371296   10120 out.go:177] * [docker-flags-182447] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:24:48.377280   10120 notify.go:220] Checking for updates...
	I1107 18:24:48.379293   10120 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:24:48.381294   10120 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:24:48.383285   10120 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:24:48.386290   10120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:24:44.507482    6492 main.go:134] libmachine: Using SSH client type: native
	I1107 18:24:44.508126    6492 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 60289 <nil> <nil>}
	I1107 18:24:44.508126    6492 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 18:24:47.696852    6492 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 18:24:44.225723000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
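The comment block minikube renders into the rewritten unit above describes the standard systemd override pattern: an empty ExecStart= directive first clears the command inherited from the base unit, and only then is the replacement set, since a non-oneshot service may carry exactly one ExecStart. The surrounding "diff -u ... || { mv ...; systemctl restart docker; }" wrapper makes the rewrite idempotent, restarting Docker only when the rendered unit actually differs. A minimal standalone drop-in using the same clear-then-set pattern (hypothetical path and flags, not taken from this run):

	# /etc/systemd/system/docker.service.d/10-override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	sudo systemctl daemon-reload && sudo systemctl restart docker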
	I1107 18:24:47.696852    6492 machine.go:91] provisioned docker machine in 6.4344555s
	I1107 18:24:47.696852    6492 client.go:171] LocalClient.Create took 41.4325832s
	I1107 18:24:47.696852    6492 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-182403" took 41.4325832s
	I1107 18:24:47.696852    6492 start.go:300] post-start starting for "cert-expiration-182403" (driver="docker")
	I1107 18:24:47.696852    6492 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 18:24:47.709849    6492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 18:24:47.716856    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:47.929701    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:48.084959    6492 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 18:24:48.094973    6492 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 18:24:48.094973    6492 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 18:24:48.094973    6492 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1107 18:24:48.094973    6492 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1107 18:24:48.095958    6492 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem -> 99482.pem in /etc/ssl/certs
	I1107 18:24:48.107948    6492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 18:24:48.128955    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /etc/ssl/certs/99482.pem (1708 bytes)
	I1107 18:24:48.180957    6492 start.go:303] post-start completed in 484.1004ms
	I1107 18:24:48.190959    6492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182403
	I1107 18:24:48.408301    6492 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\config.json ...
	I1107 18:24:48.432297    6492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:24:48.449313    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:48.661377    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:48.879372    6492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 18:24:48.890385    6492 start.go:128] duration metric: createHost completed in 42.6328843s
	I1107 18:24:48.890385    6492 start.go:83] releasing machines lock for "cert-expiration-182403", held for 42.6328843s
	I1107 18:24:48.898372    6492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182403
	I1107 18:24:49.163902    6492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 18:24:49.172898    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:49.174887    6492 ssh_runner.go:195] Run: systemctl --version
	I1107 18:24:49.182900    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:48.389295   10120 config.go:180] Loaded profile config "cert-expiration-182403": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 config.go:180] Loaded profile config "force-systemd-env-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 config.go:180] Loaded profile config "pause-182142": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:24:48.390287   10120 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:24:48.740374   10120 docker.go:137] docker version: linux-20.10.20
	I1107 18:24:48.753371   10120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:24:49.399683   10120 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:24:48.8957443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:24:49.403686   10120 out.go:177] * Using the docker driver based on user configuration
	I1107 18:24:49.405682   10120 start.go:282] selected driver: docker
	I1107 18:24:49.405682   10120 start.go:808] validating driver "docker" against <nil>
	I1107 18:24:49.405682   10120 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:24:49.478686   10120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:24:50.111570   10120 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:24:49.6290577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:24:50.111570   10120 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:24:50.112569   10120 start_flags.go:896] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1107 18:24:50.115557   10120 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:24:50.117603   10120 cni.go:95] Creating CNI manager for ""
	I1107 18:24:50.117603   10120 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 18:24:50.117603   10120 start_flags.go:317] config:
	{Name:docker-flags-182447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:docker-flags-182447 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:24:50.120564   10120 out.go:177] * Starting control plane node docker-flags-182447 in cluster docker-flags-182447
	I1107 18:24:50.122560   10120 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:24:50.126571   10120 out.go:177] * Pulling base image ...
	I1107 18:24:50.128582   10120 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 18:24:50.128582   10120 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:24:50.128582   10120 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 18:24:50.128582   10120 cache.go:57] Caching tarball of preloaded images
	I1107 18:24:50.129557   10120 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 18:24:50.129557   10120 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 18:24:50.129557   10120 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-182447\config.json ...
	I1107 18:24:50.129557   10120 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-182447\config.json: {Name:mk3233ae12be8327c92c184749de567e306a5cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:50.362044   10120 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 18:24:50.362090   10120 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 18:24:50.362158   10120 cache.go:208] Successfully downloaded all kic artifacts
	I1107 18:24:50.362747   10120 start.go:364] acquiring machines lock for docker-flags-182447: {Name:mkf6efd588223edf9f31efdf57194255c6fcfb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 18:24:50.362747   10120 start.go:368] acquired machines lock for "docker-flags-182447" in 0s
	I1107 18:24:50.362747   10120 start.go:93] Provisioning new machine with config: &{Name:docker-flags-182447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:docker-flags-182447 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:24:50.363452   10120 start.go:125] createHost starting for "" (driver="docker")
	I1107 18:24:49.383683    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:49.398662    6492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60289 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cert-expiration-182403\id_rsa Username:docker}
	I1107 18:24:49.616679    6492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 18:24:49.662990    6492 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 18:24:49.677597    6492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 18:24:49.703620    6492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 18:24:49.754587    6492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 18:24:49.926570    6492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 18:24:50.095255    6492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:24:50.291591    6492 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 18:24:50.989592    6492 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 18:24:51.184403    6492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:24:51.367388    6492 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 18:24:51.406060    6492 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 18:24:51.418060    6492 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 18:24:51.430041    6492 start.go:472] Will wait 60s for crictl version
	I1107 18:24:51.440056    6492 ssh_runner.go:195] Run: sudo crictl version
	I1107 18:24:51.733837    6492 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 18:24:51.744844    6492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:24:51.835113    6492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:24:50.367812   10120 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 18:24:50.367812   10120 start.go:159] libmachine.API.Create for "docker-flags-182447" (driver="docker")
	I1107 18:24:50.368375   10120 client.go:168] LocalClient.Create starting
	I1107 18:24:50.368967   10120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1107 18:24:50.369263   10120 main.go:134] libmachine: Decoding PEM data...
	I1107 18:24:50.369316   10120 main.go:134] libmachine: Parsing certificate...
	I1107 18:24:50.369505   10120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1107 18:24:50.369505   10120 main.go:134] libmachine: Decoding PEM data...
	I1107 18:24:50.369505   10120 main.go:134] libmachine: Parsing certificate...
	I1107 18:24:50.379989   10120 cli_runner.go:164] Run: docker network inspect docker-flags-182447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 18:24:50.597837   10120 cli_runner.go:211] docker network inspect docker-flags-182447 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 18:24:50.607957   10120 network_create.go:272] running [docker network inspect docker-flags-182447] to gather additional debugging logs...
	I1107 18:24:50.607957   10120 cli_runner.go:164] Run: docker network inspect docker-flags-182447
	W1107 18:24:50.787556   10120 cli_runner.go:211] docker network inspect docker-flags-182447 returned with exit code 1
	I1107 18:24:50.787556   10120 network_create.go:275] error running [docker network inspect docker-flags-182447]: docker network inspect docker-flags-182447: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-182447
	I1107 18:24:50.787556   10120 network_create.go:277] output of [docker network inspect docker-flags-182447]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-182447
	
	** /stderr **
	I1107 18:24:50.795597   10120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 18:24:51.017575   10120 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007224f8] misses:0}
	I1107 18:24:51.017575   10120 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.017575   10120 network_create.go:115] attempt to create docker network docker-flags-182447 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 18:24:51.025573   10120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447
	W1107 18:24:51.230708   10120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447 returned with exit code 1
	W1107 18:24:51.230708   10120 network_create.go:107] failed to create docker network docker-flags-182447 192.168.49.0/24, will retry: subnet is taken
	I1107 18:24:51.251695   10120 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:false}} dirty:map[] misses:0}
	I1107 18:24:51.251695   10120 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.273556   10120 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:true}} dirty:map[192.168.49.0:0xc0007224f8 192.168.58.0:0xc000722590] misses:0}
	I1107 18:24:51.273556   10120 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.273556   10120 network_create.go:115] attempt to create docker network docker-flags-182447 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 18:24:51.285391   10120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447
	W1107 18:24:51.468062   10120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447 returned with exit code 1
	W1107 18:24:51.468062   10120 network_create.go:107] failed to create docker network docker-flags-182447 192.168.58.0/24, will retry: subnet is taken
	I1107 18:24:51.491050   10120 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:true}} dirty:map[192.168.49.0:0xc0007224f8 192.168.58.0:0xc000722590] misses:1}
	I1107 18:24:51.491050   10120 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.517351   10120 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:true}} dirty:map[192.168.49.0:0xc0007224f8 192.168.58.0:0xc000722590 192.168.67.0:0xc000722628] misses:1}
	I1107 18:24:51.517351   10120 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.517351   10120 network_create.go:115] attempt to create docker network docker-flags-182447 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 18:24:51.527185   10120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447
	W1107 18:24:51.737841   10120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447 returned with exit code 1
	W1107 18:24:51.737841   10120 network_create.go:107] failed to create docker network docker-flags-182447 192.168.67.0/24, will retry: subnet is taken
	I1107 18:24:51.766839   10120 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:true}} dirty:map[192.168.49.0:0xc0007224f8 192.168.58.0:0xc000722590 192.168.67.0:0xc000722628] misses:2}
	I1107 18:24:51.766839   10120 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.787796   10120 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007224f8] amended:true}} dirty:map[192.168.49.0:0xc0007224f8 192.168.58.0:0xc000722590 192.168.67.0:0xc000722628 192.168.76.0:0xc000722288] misses:2}
	I1107 18:24:51.787882   10120 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:24:51.787882   10120 network_create.go:115] attempt to create docker network docker-flags-182447 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 18:24:51.798416   10120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-182447 docker-flags-182447
	I1107 18:24:52.109961   10120 network_create.go:99] docker network docker-flags-182447 192.168.76.0/24 created
	I1107 18:24:52.109961   10120 kic.go:106] calculated static IP "192.168.76.2" for the "docker-flags-182447" container
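The four attempts above show the subnet-probing pattern: minikube reserves a candidate /24, tries "docker network create", and on "subnet is taken" steps the third octet by 9 and retries (192.168.49.0 -> 58 -> 67 -> 76). A minimal Go sketch of that pattern, reduced to the flags visible in the log (the real call also sets MTU options and labels); it is an illustration, not minikube's own code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "docker-flags-182447" // network name taken from the log above
	// Candidate /24s step the third octet by 9, matching the log sequence.
	for octet := 49; octet <= 76; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
		if err != nil {
			// "subnet is taken" in the log corresponds to a non-zero exit here.
			fmt.Printf("subnet %s taken, retrying\n", subnet)
			continue
		}
		fmt.Printf("created network %s on %s\n", name, subnet)
		return
	}
	fmt.Println("no free subnet found")
}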
	I1107 18:24:52.131938   10120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 18:24:52.337147   10120 cli_runner.go:164] Run: docker volume create docker-flags-182447 --label name.minikube.sigs.k8s.io=docker-flags-182447 --label created_by.minikube.sigs.k8s.io=true
	I1107 18:24:52.581376   10120 oci.go:103] Successfully created a docker volume docker-flags-182447
	I1107 18:24:52.589195   10120 cli_runner.go:164] Run: docker run --rm --name docker-flags-182447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-182447 --entrypoint /usr/bin/test -v docker-flags-182447:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 18:24:51.925901    6492 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 18:24:51.932921    6492 cli_runner.go:164] Run: docker exec -t cert-expiration-182403 dig +short host.docker.internal
	I1107 18:24:52.314133    6492 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 18:24:52.324134    6492 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 18:24:52.335134    6492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 18:24:52.389059    6492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-182403
	I1107 18:24:52.612630    6492 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:24:52.623424    6492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:24:52.682655    6492 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:24:52.682655    6492 docker.go:543] Images already preloaded, skipping extraction
	I1107 18:24:52.690640    6492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:24:52.764580    6492 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:24:52.764580    6492 cache_images.go:84] Images are preloaded, skipping loading
	I1107 18:24:52.772583    6492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 18:24:52.943765    6492 cni.go:95] Creating CNI manager for ""
	I1107 18:24:52.943765    6492 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 18:24:52.943765    6492 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 18:24:52.943765    6492 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-182403 NodeName:cert-expiration-182403 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 18:24:52.944315    6492 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cert-expiration-182403"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
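The generated config pins cgroupDriver: cgroupfs, which has to match what the Docker daemon reports (queried earlier via docker info --format {{.CgroupDriver}}); a mismatch is a common reason the kubelet fails to start. A stdlib-only Go sketch of that consistency check, assuming the kubeadm.yaml path shown in the log; the check itself is illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	// Path from the log; the file is the kubeadm config printed above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	m := regexp.MustCompile(`(?m)^cgroupDriver:\s*(\S+)`).FindSubmatch(data)
	if m == nil {
		fmt.Fprintln(os.Stderr, "cgroupDriver not found in config")
		os.Exit(1)
	}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want, got := string(m[1]), strings.TrimSpace(string(out))
	if want != got {
		fmt.Printf("mismatch: kubelet wants %q, docker uses %q\n", want, got)
		os.Exit(1)
	}
	fmt.Printf("cgroup driver %q is consistent\n", got)
}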
	
	I1107 18:24:52.944422    6492 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-182403 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cert-expiration-182403 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 18:24:52.960468    6492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 18:24:52.990674    6492 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 18:24:53.002083    6492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 18:24:53.027678    6492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1107 18:24:53.065569    6492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 18:24:53.110117    6492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2039 bytes)
	I1107 18:24:53.162947    6492 ssh_runner.go:195] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I1107 18:24:53.172964    6492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
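Both /etc/hosts updates above follow the same pattern: filter out any stale line for the name, append the fresh mapping, and copy the file back. A reduced Go equivalent, using the control-plane entry from the log; a real implementation would need root and an atomic temp-file replacement like the grep/cp pipeline shown:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "172.17.0.2\t" + name
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Keep every line that does not already map this name.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}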
	I1107 18:24:53.199733    6492 certs.go:54] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403 for IP: 172.17.0.2
	I1107 18:24:53.200313    6492 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1107 18:24:53.200313    6492 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1107 18:24:53.201072    6492 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.key
	I1107 18:24:53.201166    6492 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.crt with IP's: []
	I1107 18:24:53.291497    6492 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.crt ...
	I1107 18:24:53.291497    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.crt: {Name:mk9d05e0bec8df9c7748c375c44ab5bc283e409c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:53.292475    6492 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.key ...
	I1107 18:24:53.292475    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\client.key: {Name:mk0af97dd94326be1556469ead208c06130f4eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:53.293499    6492 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key.7b749c5f
	I1107 18:24:53.293499    6492 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 18:24:53.898151    6492 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt.7b749c5f ...
	I1107 18:24:53.898151    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt.7b749c5f: {Name:mkc46b9904e711cc89f554efb6456cafb71d850b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:53.900113    6492 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key.7b749c5f ...
	I1107 18:24:53.900206    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key.7b749c5f: {Name:mkff9ada20c3ea13cb2c586ce1dc0231084c3136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:53.902934    6492 certs.go:320] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt.7b749c5f -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt
	I1107 18:24:53.908531    6492 certs.go:324] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key.7b749c5f -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key
	I1107 18:24:53.909534    6492 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.key
	I1107 18:24:53.909534    6492 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.crt with IP's: []
	I1107 18:24:54.132776    6492 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.crt ...
	I1107 18:24:54.132776    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.crt: {Name:mkc30248ed936909476da26a36d260e903abbbec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:54.133774    6492 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.key ...
	I1107 18:24:54.133774    6492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.key: {Name:mkef741d10b53180f4785422f9ff0b6093edce6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
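The certs steps above produce a client cert, an apiserver serving cert signed for the IPs [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1], and an aggregator proxy-client cert. For illustration only, a self-signed Go sketch of how a SAN-bearing serving certificate like the apiserver one is built (minikube actually signs with its minikubeCA key rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN IPs copied from the apiserver cert generated above.
		IPAddresses: []net.IP{
			net.ParseIP("172.17.0.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}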
	I1107 18:24:54.141775    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem (1338 bytes)
	W1107 18:24:54.141775    6492 certs.go:384] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948_empty.pem, impossibly tiny 0 bytes
	I1107 18:24:54.141775    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1107 18:24:54.141775    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1107 18:24:54.141775    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1107 18:24:54.141775    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1107 18:24:54.142786    6492 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem (1708 bytes)
	I1107 18:24:54.143783    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 18:24:54.237831    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 18:24:54.467171   10120 cli_runner.go:217] Completed: docker run --rm --name docker-flags-182447-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-182447 --entrypoint /usr/bin/test -v docker-flags-182447:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (1.8779563s)
	I1107 18:24:54.467171   10120 oci.go:107] Successfully prepared a docker volume docker-flags-182447
	I1107 18:24:54.467171   10120 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:24:54.467171   10120 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 18:24:54.474182   10120 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-182447:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 18:24:58.637129    1844 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 18:24:58.637129    1844 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 18:24:58.637129    1844 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 18:24:58.637860    1844 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 18:24:58.638264    1844 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 18:24:58.638552    1844 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 18:24:58.641268    1844 out.go:204]   - Generating certificates and keys ...
	I1107 18:24:58.641608    1844 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 18:24:58.641802    1844 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 18:24:58.642690    1844 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-182331 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 18:24:58.642868    1844 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 18:24:58.642923    1844 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-182331 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 18:24:58.643618    1844 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 18:24:58.643899    1844 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 18:24:58.644174    1844 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 18:24:58.644452    1844 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 18:24:58.644611    1844 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 18:24:58.644858    1844 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 18:24:58.645117    1844 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 18:24:58.645276    1844 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 18:24:58.645688    1844 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 18:24:58.646022    1844 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 18:24:58.646303    1844 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 18:24:58.646440    1844 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 18:24:58.648246    1844 out.go:204]   - Booting up control plane ...
	I1107 18:24:58.648246    1844 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 18:24:58.649242    1844 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 18:24:58.649242    1844 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 18:24:58.649242    1844 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 18:24:58.650191    1844 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 18:24:58.650191    1844 kubeadm.go:317] [apiclient] All control plane components are healthy after 23.507076 seconds
	I1107 18:24:58.650641    1844 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 18:24:58.650641    1844 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 18:24:58.651245    1844 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 18:24:58.651245    1844 kubeadm.go:317] [mark-control-plane] Marking the node force-systemd-env-182331 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 18:24:58.651245    1844 kubeadm.go:317] [bootstrap-token] Using token: 5l960i.p5ralx2ph1fxogvv
	I1107 18:24:54.297530    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 18:24:54.363143    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cert-expiration-182403\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 18:24:54.414176    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 18:24:54.468164    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 18:24:54.521162    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 18:24:54.575891    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 18:24:54.649023    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem --> /usr/share/ca-certificates/9948.pem (1338 bytes)
	I1107 18:24:54.705514    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /usr/share/ca-certificates/99482.pem (1708 bytes)
	I1107 18:24:54.766980    6492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 18:24:54.816979    6492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 18:24:54.869576    6492 ssh_runner.go:195] Run: openssl version
	I1107 18:24:54.900570    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99482.pem && ln -fs /usr/share/ca-certificates/99482.pem /etc/ssl/certs/99482.pem"
	I1107 18:24:54.938206    6492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99482.pem
	I1107 18:24:54.954238    6492 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 17:01 /usr/share/ca-certificates/99482.pem
	I1107 18:24:54.964279    6492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99482.pem
	I1107 18:24:54.989284    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 18:24:55.026286    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 18:24:55.064279    6492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:24:55.088851    6492 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:24:55.098825    6492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:24:55.119821    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 18:24:55.156586    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9948.pem && ln -fs /usr/share/ca-certificates/9948.pem /etc/ssl/certs/9948.pem"
	I1107 18:24:55.187905    6492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9948.pem
	I1107 18:24:55.200340    6492 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 17:01 /usr/share/ca-certificates/9948.pem
	I1107 18:24:55.214369    6492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9948.pem
	I1107 18:24:55.241845    6492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9948.pem /etc/ssl/certs/51391683.0"
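The openssl x509 -hash / ln -fs sequence above installs each CA where OpenSSL expects to find it: /etc/ssl/certs/<subject-hash>.0. A small Go sketch of the same loop, shelling out to openssl exactly as the log does; the cert paths are the ones listed above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert hashes a CA cert with openssl and symlinks it by subject hash.
func linkCert(path string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Replace any stale link, mirroring the `ln -fs` in the log.
	os.Remove(link)
	return os.Symlink(path, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/9948.pem",
		"/usr/share/ca-certificates/99482.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkCert(p); err != nil {
			fmt.Fprintln(os.Stderr, p, err)
		}
	}
}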
	I1107 18:24:55.265863    6492 kubeadm.go:396] StartCluster: {Name:cert-expiration-182403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cert-expiration-182403 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:24:55.276234    6492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 18:24:55.340920    6492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 18:24:55.475193    6492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 18:24:55.509711    6492 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 18:24:55.529009    6492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 18:24:55.562009    6492 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 18:24:55.562009    6492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 18:24:55.667282    6492 kubeadm.go:317] W1107 18:24:55.675296    1193 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 18:24:55.765043    6492 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 18:24:55.966545    6492 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 18:24:58.653486    1844 out.go:204]   - Configuring RBAC rules ...
	I1107 18:24:58.654144    1844 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 18:24:58.654344    1844 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 18:24:58.654633    1844 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 18:24:58.654937    1844 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 18:24:58.655172    1844 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 18:24:58.655172    1844 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 18:24:58.655783    1844 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 18:24:58.655783    1844 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1107 18:24:58.655783    1844 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1107 18:24:58.655783    1844 kubeadm.go:317] 
	I1107 18:24:58.656348    1844 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1107 18:24:58.656416    1844 kubeadm.go:317] 
	I1107 18:24:58.656416    1844 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1107 18:24:58.656416    1844 kubeadm.go:317] 
	I1107 18:24:58.656416    1844 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1107 18:24:58.656416    1844 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 18:24:58.656416    1844 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 18:24:58.656416    1844 kubeadm.go:317] 
	I1107 18:24:58.657065    1844 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1107 18:24:58.657104    1844 kubeadm.go:317] 
	I1107 18:24:58.657104    1844 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 18:24:58.657104    1844 kubeadm.go:317] 
	I1107 18:24:58.657104    1844 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1107 18:24:58.657104    1844 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 18:24:58.657104    1844 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 18:24:58.657104    1844 kubeadm.go:317] 
	I1107 18:24:58.657104    1844 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 18:24:58.658045    1844 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1107 18:24:58.658045    1844 kubeadm.go:317] 
	I1107 18:24:58.658045    1844 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 5l960i.p5ralx2ph1fxogvv \
	I1107 18:24:58.658045    1844 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae \
	I1107 18:24:58.658629    1844 kubeadm.go:317] 	--control-plane 
	I1107 18:24:58.658629    1844 kubeadm.go:317] 
	I1107 18:24:58.658844    1844 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1107 18:24:58.658844    1844 kubeadm.go:317] 
	I1107 18:24:58.659042    1844 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 5l960i.p5ralx2ph1fxogvv \
	I1107 18:24:58.659456    1844 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae 
	I1107 18:24:58.659557    1844 cni.go:95] Creating CNI manager for ""
	I1107 18:24:58.659557    1844 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 18:24:58.659681    1844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 18:24:58.673026    1844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:24:58.674640    1844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=force-systemd-env-182331 minikube.k8s.io/updated_at=2022_11_07T18_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:24:59.523728    1844 kubeadm.go:1067] duration metric: took 864.0378ms to wait for elevateKubeSystemPrivileges.
	I1107 18:24:59.523728    1844 ops.go:34] apiserver oom_adj: -16
	I1107 18:24:59.743555    1844 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=force-systemd-env-182331 minikube.k8s.io/updated_at=2022_11_07T18_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.0689032s)
	I1107 18:24:59.743555    1844 kubeadm.go:398] StartCluster complete in 32.9549849s
	I1107 18:24:59.743555    1844 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:59.744545    1844 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:24:59.746540    1844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:24:59.756551    1844 kapi.go:59] client config for force-systemd-env-182331: &rest.Config{Host:"https://127.0.0.1:60228", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 18:24:59.757550    1844 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 18:25:00.464383    1844 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "force-systemd-env-182331" rescaled to 1
	I1107 18:25:00.464602    1844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 18:25:00.464602    1844 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:25:00.471054    1844 out.go:177] * Verifying Kubernetes components...
	I1107 18:25:00.464638    1844 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 18:25:00.465272    1844 config.go:180] Loaded profile config "force-systemd-env-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:25:00.471054    1844 addons.go:65] Setting storage-provisioner=true in profile "force-systemd-env-182331"
	I1107 18:25:00.471054    1844 addons.go:65] Setting default-storageclass=true in profile "force-systemd-env-182331"
	I1107 18:25:00.474020    1844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-182331"
	I1107 18:25:00.473902    1844 addons.go:227] Setting addon storage-provisioner=true in "force-systemd-env-182331"
	W1107 18:25:00.474061    1844 addons.go:236] addon storage-provisioner should already be in state true
	I1107 18:25:00.474061    1844 host.go:66] Checking if "force-systemd-env-182331" exists ...
	I1107 18:25:00.491363    1844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:25:00.499587    1844 cli_runner.go:164] Run: docker container inspect force-systemd-env-182331 --format={{.State.Status}}
	I1107 18:25:00.502610    1844 cli_runner.go:164] Run: docker container inspect force-systemd-env-182331 --format={{.State.Status}}
	I1107 18:25:00.668292    1844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 18:25:00.675308    1844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-182331
	I1107 18:25:00.752561    1844 kapi.go:59] client config for force-systemd-env-182331: &rest.Config{Host:"https://127.0.0.1:60228", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 18:25:00.783390    1844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 18:25:00.786425    1844 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:25:00.786503    1844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 18:25:00.796658    1844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-182331
	I1107 18:25:00.848043    1844 addons.go:227] Setting addon default-storageclass=true in "force-systemd-env-182331"
	W1107 18:25:00.848043    1844 addons.go:236] addon default-storageclass should already be in state true
	I1107 18:25:00.848043    1844 host.go:66] Checking if "force-systemd-env-182331" exists ...
	I1107 18:25:00.873992    1844 cli_runner.go:164] Run: docker container inspect force-systemd-env-182331 --format={{.State.Status}}
	I1107 18:25:00.923108    1844 kapi.go:59] client config for force-systemd-env-182331: &rest.Config{Host:"https://127.0.0.1:60228", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\force-systemd-env-182331\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 18:25:00.925008    1844 api_server.go:51] waiting for apiserver process to appear ...
	I1107 18:25:00.945069    1844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 18:25:01.016097    1844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60224 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-182331\id_rsa Username:docker}
	I1107 18:25:01.112099    1844 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 18:25:01.112099    1844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 18:25:01.121088    1844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-182331
	I1107 18:25:01.272039    1844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:25:01.347543    1844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60224 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-182331\id_rsa Username:docker}
	I1107 18:25:01.863920    1844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 18:25:03.329643    1844 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.3845488s)
	I1107 18:25:03.329643    1844 api_server.go:71] duration metric: took 2.8649743s to wait for apiserver process to appear ...
	I1107 18:25:03.329643    1844 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.6602975s)
	I1107 18:25:03.329643    1844 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1107 18:25:03.329643    1844 api_server.go:87] waiting for apiserver healthz status ...
	I1107 18:25:03.329643    1844 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60228/healthz ...
	I1107 18:25:03.356621    1844 api_server.go:278] https://127.0.0.1:60228/healthz returned 200:
	ok
	I1107 18:25:03.361619    1844 api_server.go:140] control plane version: v1.25.3
	I1107 18:25:03.361619    1844 api_server.go:130] duration metric: took 31.9761ms to wait for apiserver health ...
	I1107 18:25:03.361619    1844 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 18:25:03.374623    1844 system_pods.go:59] 4 kube-system pods found
	I1107 18:25:03.374623    1844 system_pods.go:61] "etcd-force-systemd-env-182331" [4fedc72b-97e9-4b8b-999e-a531fc8212f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 18:25:03.374623    1844 system_pods.go:61] "kube-apiserver-force-systemd-env-182331" [23293bcf-bb16-458c-8b4a-da2a207c268b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 18:25:03.374623    1844 system_pods.go:61] "kube-controller-manager-force-systemd-env-182331" [c39d81ec-129b-4609-aa7e-afd669dc383c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 18:25:03.374623    1844 system_pods.go:61] "kube-scheduler-force-systemd-env-182331" [f6ed4728-a48c-4e84-8b02-8a08628900d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 18:25:03.374623    1844 system_pods.go:74] duration metric: took 13.0033ms to wait for pod list to return data ...
	I1107 18:25:03.374623    1844 kubeadm.go:573] duration metric: took 2.9099537s to wait for : map[apiserver:true system_pods:true] ...
	I1107 18:25:03.374623    1844 node_conditions.go:102] verifying NodePressure condition ...
	I1107 18:25:03.426635    1844 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1107 18:25:03.426635    1844 node_conditions.go:123] node cpu capacity is 16
	I1107 18:25:03.426635    1844 node_conditions.go:105] duration metric: took 52.0114ms to run NodePressure ...
	I1107 18:25:03.426635    1844 start.go:217] waiting for startup goroutines ...
	I1107 18:25:03.624121    1844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.7601821s)
	I1107 18:25:03.624360    1844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.3520569s)
	I1107 18:25:03.631822    1844 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 18:25:03.634357    1844 addons.go:488] enableAddons completed in 3.1696853s
	I1107 18:25:03.647350    1844 ssh_runner.go:195] Run: rm -f paused
	I1107 18:25:03.955152    1844 start.go:506] kubectl: 1.18.2, cluster: 1.25.3 (minor skew: 7)
	I1107 18:25:03.957142    1844 out.go:177] 
	W1107 18:25:03.959159    1844 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3.
	I1107 18:25:03.963144    1844 out.go:177]   - Want kubectl v1.25.3? Try 'minikube kubectl -- get pods -A'
	I1107 18:25:03.967152    1844 out.go:177] * Done! kubectl is now configured to use "force-systemd-env-182331" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 18:22:17 UTC, end at Mon 2022-11-07 18:25:09 UTC. --
	Nov 07 18:23:43 pause-182142 dockerd[4189]: time="2022-11-07T18:23:43.325352400Z" level=info msg="Loading containers: start."
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.121347400Z" level=info msg="ignoring event" container=851a800aae3575a79623bc91882895a7d9fe1f06aef9b21f78a16f4e3bf9169d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.121653700Z" level=info msg="ignoring event" container=bf7a05217ed04b82f3fd1267805f11862baab37cc54fd304cd7d1b70447080cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.220953300Z" level=info msg="ignoring event" container=0ff9f33858c4c6c027328bfc9b028a790542ded89144be30c71ae89f8a4eb3ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.222991600Z" level=info msg="ignoring event" container=67c6693b310721e9765185a485c166be849334dc3839615decdc22ec602f72eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.223036300Z" level=info msg="ignoring event" container=a091e583621f94f538316eb23a541e918ffcaa794e99cf352841ddfc91cde8ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:44 pause-182142 dockerd[4189]: time="2022-11-07T18:23:44.223069700Z" level=info msg="ignoring event" container=d38f5973e26926fd8594dfdd0060ab415b61f7214b498830c2b5a3be95eb940d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:23:53 pause-182142 dockerd[4189]: time="2022-11-07T18:23:53.821475700Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=7bf4bc1418d534e8160cbb45b2a03d9aeaad8875733b924e8763f292ae815ecd
	Nov 07 18:24:00 pause-182142 dockerd[4189]: time="2022-11-07T18:24:00.266583900Z" level=info msg="ignoring event" container=7bf4bc1418d534e8160cbb45b2a03d9aeaad8875733b924e8763f292ae815ecd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.150898200Z" level=info msg="Removing stale sandbox 4f5e2db58e26505155192cac70dc335a725d6a563270e429afa405d8fa58197d (0ff9f33858c4c6c027328bfc9b028a790542ded89144be30c71ae89f8a4eb3ad)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.158012700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f 9603d7a4c936ae87cd4ae4a91cf7499c655d9672190cd054812c80b0e6b38fcc], retrying...."
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.410625300Z" level=info msg="Removing stale sandbox 9aeb9f0fe746e66dbaefcb26b7f670857bd77aa4c742fe73312e2fe118d9b340 (d38f5973e26926fd8594dfdd0060ab415b61f7214b498830c2b5a3be95eb940d)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.424506800Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f a125cd2344a61e8a9d43a3767278785f6e90d8fbbc8a0808c6135f2287d76f5c], retrying...."
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.693052800Z" level=info msg="Removing stale sandbox da3270f618d795d26620543bc14843f67a9f4d1a7be07872beb956dbe553462f (a091e583621f94f538316eb23a541e918ffcaa794e99cf352841ddfc91cde8ff)"
	Nov 07 18:24:01 pause-182142 dockerd[4189]: time="2022-11-07T18:24:01.972661400Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f c789841f4bf205117a7b766e19e75294711c7c21a32c5e9c5cce441924754c43], retrying...."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.137863500Z" level=info msg="Removing stale sandbox ecb8f973e730f79435b0a9470976153775800cbc9aad1d31e1051b74ec7a7724 (67c6693b310721e9765185a485c166be849334dc3839615decdc22ec602f72eb)"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.151625100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 43298edcaf6f2c60ac4e071bb8e4b28578c935008e83d0cff345e3f27983aa7f 3b0a1e9a297a039a99f7f61713a54ea08710bf93a22a9225a60fe3fbf9fc3db5], retrying...."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.293828000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.431484800Z" level=info msg="Loading containers: done."
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.522088200Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.522308900Z" level=info msg="Daemon has completed initialization"
	Nov 07 18:24:07 pause-182142 systemd[1]: Started Docker Application Container Engine.
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.587136000Z" level=info msg="API listen on [::]:2376"
	Nov 07 18:24:07 pause-182142 dockerd[4189]: time="2022-11-07T18:24:07.595830900Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 18:24:44 pause-182142 dockerd[4189]: time="2022-11-07T18:24:44.255992300Z" level=error msg="Handler for POST /v1.41/containers/e26886dd54b5/pause returned error: Cannot pause container e26886dd54b5ad440ef84a02f910535ca78d4b3867ee8ee5b330072871b2da89: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5ee28a4608398       6e38f40d628db       38 seconds ago       Running             storage-provisioner       0                   6ea919f924e35
	704cf9f28fe3a       beaaf00edd38a       56 seconds ago       Running             kube-proxy                2                   4a1bbbda88fad
	004295df2ac3e       0346dbd74bcb9       57 seconds ago       Running             kube-apiserver            2                   f5c7a1c743618
	36d3f2cad4b50       5185b96f0becf       About a minute ago   Running             coredns                   1                   5f96bae2a07ac
	d747119d4a252       6039992312758       About a minute ago   Running             kube-controller-manager   2                   f81ca5df5d9bc
	7b2ec2b1aa93b       6d23ec0e8b87e       About a minute ago   Running             kube-scheduler            1                   e436e661c2403
	e26886dd54b5a       a8a176a5d5d69       About a minute ago   Running             etcd                      1                   173026e317b76
	851a800aae357       6039992312758       About a minute ago   Exited              kube-controller-manager   1                   67c6693b31072
	7bf4bc1418d53       0346dbd74bcb9       About a minute ago   Exited              kube-apiserver            1                   0ff9f33858c4c
	bf7a05217ed04       beaaf00edd38a       About a minute ago   Exited              kube-proxy                1                   a091e583621f9
	3866e1d50d618       5185b96f0becf       About a minute ago   Exited              coredns                   0                   c64fe72b7d58d
	733b0d85b57b0       6d23ec0e8b87e       2 minutes ago        Exited              kube-scheduler            0                   f2bac5082004e
	07e066724ef96       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   0cd768ca00256
	
	* 
	* ==> coredns [36d3f2cad4b5] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [3866e1d50d61] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Nov 7 17:55] WSL2: Performing memory compaction.
	[Nov 7 17:56] WSL2: Performing memory compaction.
	[Nov 7 17:57] WSL2: Performing memory compaction.
	[Nov 7 17:59] WSL2: Performing memory compaction.
	[Nov 7 18:00] WSL2: Performing memory compaction.
	[Nov 7 18:01] WSL2: Performing memory compaction.
	[Nov 7 18:03] WSL2: Performing memory compaction.
	[Nov 7 18:04] WSL2: Performing memory compaction.
	[Nov 7 18:05] WSL2: Performing memory compaction.
	[Nov 7 18:06] WSL2: Performing memory compaction.
	[Nov 7 18:07] WSL2: Performing memory compaction.
	[Nov 7 18:08] WSL2: Performing memory compaction.
	[Nov 7 18:10] WSL2: Performing memory compaction.
	[Nov 7 18:11] WSL2: Performing memory compaction.
	[Nov 7 18:12] WSL2: Performing memory compaction.
	[Nov 7 18:13] WSL2: Performing memory compaction.
	[Nov 7 18:15] WSL2: Performing memory compaction.
	[Nov 7 18:16] WSL2: Performing memory compaction.
	[Nov 7 18:17] WSL2: Performing memory compaction.
	[Nov 7 18:18] WSL2: Performing memory compaction.
	[Nov 7 18:19] WSL2: Performing memory compaction.
	[Nov 7 18:20] process 'docker/tmp/qemu-check426843351/check' started with executable stack
	[Nov 7 18:21] WSL2: Performing memory compaction.
	[Nov 7 18:23] WSL2: Performing memory compaction.
	[Nov 7 18:24] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [07e066724ef9] <==
	* WARNING: 2022/11/07 18:23:26 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:23:27.340Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.3886524s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:23:27.340Z","caller":"traceutil/trace.go:171","msg":"trace[884282194] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:398; }","duration":"3.3889682s","start":"2022-11-07T18:23:23.951Z","end":"2022-11-07T18:23:27.340Z","steps":["trace[884282194] 'range keys from in-memory index tree'  (duration: 3.3886208s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.0554101s","expected-duration":"100ms","prefix":"","request":"header:<ID:2289944428977903656 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" value_size:663 lease:2289944428977903179 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-11-07T18:23:27.341Z","caller":"traceutil/trace.go:171","msg":"trace[258053472] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:411; }","duration":"7.0049861s","start":"2022-11-07T18:23:20.336Z","end":"2022-11-07T18:23:27.341Z","steps":["trace[258053472] 'read index received'  (duration: 3.9486418s)","trace[258053472] 'applied index is now lower than readState.Index'  (duration: 3.0563403s)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:23:27.341Z","caller":"traceutil/trace.go:171","msg":"trace[1600712388] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"6.2874973s","start":"2022-11-07T18:23:21.054Z","end":"2022-11-07T18:23:27.341Z","steps":["trace[1600712388] 'process raft request'  (duration: 3.2314658s)","trace[1600712388] 'compare'  (duration: 3.054117s)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:21.054Z","time spent":"6.287582s","remote":"127.0.0.1:37120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":751,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-565d847f94-5kjqw.1725608e3de9d6e0\" value_size:663 lease:2289944428977903179 >> failure:<>"}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.0053359s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-182142\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-11-07T18:23:27.342Z","caller":"traceutil/trace.go:171","msg":"trace[1338312172] range","detail":"{range_begin:/registry/minions/pause-182142; range_end:; response_count:1; response_revision:399; }","duration":"7.0054294s","start":"2022-11-07T18:23:20.336Z","end":"2022-11-07T18:23:27.342Z","steps":["trace[1338312172] 'agreement among raft nodes before linearized reading'  (duration: 7.0052822s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:20.336Z","time spent":"7.0055702s","remote":"127.0.0.1:37172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":4572,"request content":"key:\"/registry/minions/pause-182142\" "}
	{"level":"warn","ts":"2022-11-07T18:23:27.351Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"3.0409771s","expected-duration":"1s"}
	{"level":"warn","ts":"2022-11-07T18:23:27.352Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"973.117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"warn","ts":"2022-11-07T18:23:27.352Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"793.5912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:23:27.353Z","caller":"traceutil/trace.go:171","msg":"trace[782749255] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:399; }","duration":"973.3128ms","start":"2022-11-07T18:23:26.379Z","end":"2022-11-07T18:23:27.353Z","steps":["trace[782749255] 'agreement among raft nodes before linearized reading'  (duration: 972.9107ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:23:27.353Z","caller":"traceutil/trace.go:171","msg":"trace[1461185123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"793.9802ms","start":"2022-11-07T18:23:26.559Z","end":"2022-11-07T18:23:27.353Z","steps":["trace[1461185123] 'agreement among raft nodes before linearized reading'  (duration: 793.5552ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:23:27.353Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:26.379Z","time spent":"973.4656ms","remote":"127.0.0.1:37164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":365,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-11-07T18:23:27.353Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:23:26.559Z","time spent":"794.0548ms","remote":"127.0.0.1:37200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-11-07T18:23:37.438Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T18:23:37.438Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-182142","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/11/07 18:23:37 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/07 18:23:37 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T18:23:37.618Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-11-07T18:23:37.630Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T18:23:37.631Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T18:23:37.631Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-182142","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [e26886dd54b5] <==
	* {"level":"warn","ts":"2022-11-07T18:24:34.243Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.0203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-565d847f94-7hl75.17256092a95555a4\" ","response":"range_response_count:1 size:687"}
	{"level":"info","ts":"2022-11-07T18:24:34.243Z","caller":"traceutil/trace.go:171","msg":"trace[1565716881] range","detail":"{range_begin:/registry/events/kube-system/coredns-565d847f94-7hl75.17256092a95555a4; range_end:; response_count:1; response_revision:488; }","duration":"121.262ms","start":"2022-11-07T18:24:34.122Z","end":"2022-11-07T18:24:34.243Z","steps":["trace[1565716881] 'agreement among raft nodes before linearized reading'  (duration: 26.0553ms)","trace[1565716881] 'range keys from in-memory index tree'  (duration: 94.9425ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:24:35.138Z","caller":"traceutil/trace.go:171","msg":"trace[1241044910] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:520; }","duration":"203.6436ms","start":"2022-11-07T18:24:34.934Z","end":"2022-11-07T18:24:35.138Z","steps":["trace[1241044910] 'read index received'  (duration: 203.6323ms)","trace[1241044910] 'applied index is now lower than readState.Index'  (duration: 8.4µs)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:35.150Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"215.3801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2022-11-07T18:24:35.150Z","caller":"traceutil/trace.go:171","msg":"trace[1403645031] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:494; }","duration":"215.5537ms","start":"2022-11-07T18:24:34.934Z","end":"2022-11-07T18:24:35.150Z","steps":["trace[1403645031] 'agreement among raft nodes before linearized reading'  (duration: 203.8569ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:24:42.356Z","caller":"traceutil/trace.go:171","msg":"trace[767784216] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"224.45ms","start":"2022-11-07T18:24:42.131Z","end":"2022-11-07T18:24:42.356Z","steps":["trace[767784216] 'process raft request'  (duration: 154.6735ms)","trace[767784216] 'compare'  (duration: 69.5201ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:43.245Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:43.249Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"527.5134ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289944429000143197 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" value_size:514 lease:2289944429000142961 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-11-07T18:24:43.249Z","caller":"traceutil/trace.go:171","msg":"trace[1999920095] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"529.0982ms","start":"2022-11-07T18:24:42.720Z","end":"2022-11-07T18:24:43.249Z","steps":["trace[1999920095] 'compare'  (duration: 527.1569ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:24:43.249Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.720Z","time spent":"529.2561ms","remote":"127.0.0.1:39926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":586,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-182142.172560a13e61a5e0\" value_size:514 lease:2289944429000142961 >> failure:<>"}
	WARNING: 2022/11/07 18:24:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:24:43.506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.841Z","time spent":"664.9013ms","remote":"127.0.0.1:39946","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	WARNING: 2022/11/07 18:24:43 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-11-07T18:24:43.746Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:44.247Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:44.749Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:45.250Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:45.751Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:46.252Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:46.754Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289944429000143192,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-11-07T18:24:47.176Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"4.4555571s","expected-duration":"1s"}
	{"level":"info","ts":"2022-11-07T18:24:47.212Z","caller":"traceutil/trace.go:171","msg":"trace[741265500] linearizableReadLoop","detail":"{readStateIndex:543; appliedIndex:541; }","duration":"4.4679427s","start":"2022-11-07T18:24:42.744Z","end":"2022-11-07T18:24:47.212Z","steps":["trace[741265500] 'read index received'  (duration: 4.432951s)","trace[741265500] 'applied index is now lower than readState.Index'  (duration: 34.9871ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:24:47.212Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.468278s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1112"}
	{"level":"info","ts":"2022-11-07T18:24:47.212Z","caller":"traceutil/trace.go:171","msg":"trace[1159948020] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:515; }","duration":"4.468355s","start":"2022-11-07T18:24:42.744Z","end":"2022-11-07T18:24:47.212Z","steps":["trace[1159948020] 'agreement among raft nodes before linearized reading'  (duration: 4.4682304s)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:24:47.212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:24:42.744Z","time spent":"4.4684298s","remote":"127.0.0.1:39944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	* 
	* ==> kernel <==
	*  18:25:23 up  1:40,  0 users,  load average: 7.17, 7.30, 4.46
	Linux pause-182142 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [004295df2ac3] <==
	* Trace[6906245]: ---"About to write a response" 608ms (18:24:31.972)
	Trace[6906245]: [609.538ms] [609.538ms] END
	I1107 18:24:43.250660       1 trace.go:205] Trace[259362888]: "Create etcd3" audit-id:83ab0b48-cc26-4416-8e9d-0e6f961e15fe,key:/events/default/pause-182142.172560a13e61a5e0,type:*core.Event (07-Nov-2022 18:24:42.667) (total time: 583ms):
	Trace[259362888]: ---"Txn call finished" err:<nil> 582ms (18:24:43.250)
	Trace[259362888]: [583.0138ms] [583.0138ms] END
	I1107 18:24:43.251339       1 trace.go:205] Trace[1047769700]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:83ab0b48-cc26-4416-8e9d-0e6f961e15fe,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:24:42.665) (total time: 585ms):
	Trace[1047769700]: ---"Write to database call finished" len:254,err:<nil> 584ms (18:24:43.250)
	Trace[1047769700]: [585.6217ms] [585.6217ms] END
	{"level":"warn","ts":"2022-11-07T18:24:43.505Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ac9500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1107 18:24:43.505859       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	E1107 18:24:43.506024       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1107 18:24:43.506031       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-11-07T18:24:43.505Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0019f5dc0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1107 18:24:43.506083       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 16.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	I1107 18:24:43.506124       1 trace.go:205] Trace[249348945]: "GuaranteedUpdate etcd3" audit-id:2c0f0cd8-d841-4060-9f3d-7a9fc782a4fb,key:/minions/pause-182142,type:*core.Node (07-Nov-2022 18:24:42.834) (total time: 671ms):
	Trace[249348945]: ---"Txn call finished" err:context canceled 665ms (18:24:43.506)
	Trace[249348945]: [671.305ms] [671.305ms] END
	E1107 18:24:43.506157       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 274.5µs, panicked: false, err: context canceled, panic-reason: <nil>
	E1107 18:24:43.507608       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I1107 18:24:43.508988       1 trace.go:205] Trace[937414451]: "Patch" url:/api/v1/nodes/pause-182142/status,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:2c0f0cd8-d841-4060-9f3d-7a9fc782a4fb,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:24:42.834) (total time: 674ms):
	Trace[937414451]: [674.5295ms] [674.5295ms] END
	E1107 18:24:43.509352       1 timeout.go:141] post-timeout activity - time-elapsed: 3.6133ms, PATCH "/api/v1/nodes/pause-182142/status" result: <nil>
	E1107 18:24:43.510308       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1107 18:24:43.515827       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E1107 18:24:43.517399       1 timeout.go:141] post-timeout activity - time-elapsed: 11.6074ms, POST "/api/v1/namespaces/default/events" result: <nil>
	
	* 
	* ==> kube-apiserver [7bf4bc1418d5] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:52.320591       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:52.428151       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:23:53.057919       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [851a800aae35] <==
	* 
	* 
	* ==> kube-controller-manager [d747119d4a25] <==
	* I1107 18:24:35.518319       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:24:35.518337       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1107 18:24:35.518676       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1107 18:24:35.518681       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:24:35.518829       1 shared_informer.go:262] Caches are synced for deployment
	I1107 18:24:35.518995       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 18:24:35.519231       1 shared_informer.go:262] Caches are synced for node
	I1107 18:24:35.521033       1 range_allocator.go:166] Starting range CIDR allocator
	I1107 18:24:35.521085       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1107 18:24:35.521118       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1107 18:24:35.519558       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 18:24:35.528357       1 shared_informer.go:262] Caches are synced for taint
	I1107 18:24:35.528581       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 18:24:35.528616       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 18:24:35.528666       1 taint_manager.go:209] "Sending events to api server"
	W1107 18:24:35.528843       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-182142. Assuming now as a timestamp.
	I1107 18:24:35.529078       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 18:24:35.529171       1 event.go:294] "Event occurred" object="pause-182142" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-182142 event: Registered Node pause-182142 in Controller"
	I1107 18:24:35.530995       1 shared_informer.go:262] Caches are synced for persistent volume
	I1107 18:24:35.540605       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:24:35.626996       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:24:35.638396       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:24:35.946454       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:24:35.958971       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:24:35.959090       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [704cf9f28fe3] <==
	* I1107 18:24:15.728232       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:24:15.732303       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:24:15.735418       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:24:15.817738       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1107 18:24:15.826963       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	I1107 18:24:22.324348       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1107 18:24:22.324526       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1107 18:24:22.324575       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:24:22.718077       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:24:22.718373       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:24:22.718400       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:24:22.718426       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:24:22.718460       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:24:22.718881       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:24:22.719449       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:24:22.719553       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:24:22.720824       1 config.go:444] "Starting node config controller"
	I1107 18:24:22.721031       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:24:22.721041       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:24:22.720891       1 config.go:317] "Starting service config controller"
	I1107 18:24:22.721369       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:24:22.721378       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:24:22.830764       1 shared_informer.go:262] Caches are synced for service config
	I1107 18:24:22.830880       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:24:22.830846       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [bf7a05217ed0] <==
	* E1107 18:23:39.853343       1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I1107 18:23:39.856925       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:23:39.859506       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:23:39.862717       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:23:39.865636       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:23:39.868277       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1107 18:23:39.875796       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 18:23:41.048436       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-182142": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [733b0d85b57b] <==
	* E1107 18:22:53.917868       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:22:53.918128       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 18:22:53.918266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 18:22:54.008910       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 18:22:54.009042       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 18:22:54.036844       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 18:22:54.036999       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 18:22:54.117852       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 18:22:54.118007       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 18:22:54.285237       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 18:22:54.285416       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 18:22:54.316684       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 18:22:54.316831       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 18:22:54.323838       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 18:22:54.325001       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 18:22:54.337816       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:22:54.337973       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:22:56.019019       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 18:22:56.019343       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 18:23:01.523293       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:23:37.418953       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 18:23:37.419190       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1107 18:23:37.419387       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I1107 18:23:37.419580       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1107 18:23:37.420157       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [7b2ec2b1aa93] <==
	* I1107 18:24:15.118739       1 serving.go:348] Generated self-signed cert in-memory
	W1107 18:24:22.217614       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 18:24:22.217765       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 18:24:22.217783       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 18:24:22.217794       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 18:24:22.419964       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 18:24:22.420464       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:24:22.424807       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 18:24:22.425732       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 18:24:22.425858       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:24:22.425914       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 18:24:22.531171       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 18:22:17 UTC, end at Mon 2022-11-07 18:25:24 UTC. --
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.842502    6389 kubelet_node_status.go:73] "Successfully registered node" node="pause-182142"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.928653    6389 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.929093    6389 status_manager.go:161] "Starting to sync pod status with apiserver"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.929331    6389 kubelet.go:2010] "Starting kubelet main sync loop"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: E1107 18:24:41.929475    6389 kubelet.go:2034] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	Nov 07 18:24:41 pause-182142 kubelet[6389]: I1107 18:24:41.944017    6389 setters.go:545] "Node became not ready" node="pause-182142" condition={Type:Ready Status:False LastHeartbeatTime:2022-11-07 18:24:41.9439341 +0000 UTC m=+0.766258701 LastTransitionTime:2022-11-07 18:24:41.9439341 +0000 UTC m=+0.766258701 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.030671    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.231739    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.448312    6389 apiserver.go:52] "Watching apiserver"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: E1107 18:24:42.634482    6389 kubelet.go:2034] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645479    6389 cpu_manager.go:213] "Starting CPU manager" policy="none"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645673    6389 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.645835    6389 state_mem.go:36] "Initialized new in-memory state store"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646317    6389 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646513    6389 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.646538    6389 policy_none.go:49] "None policy: Start"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659011    6389 memory_manager.go:168] "Starting memorymanager" policy="None"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659132    6389 state_mem.go:35] "Initializing new in-memory state store"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.659343    6389 state_mem.go:75] "Updated machine memory state"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.662268    6389 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	Nov 07 18:24:42 pause-182142 kubelet[6389]: I1107 18:24:42.663151    6389 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Nov 07 18:24:43 pause-182142 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 07 18:24:43 pause-182142 kubelet[6389]: I1107 18:24:43.397064    6389 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 07 18:24:43 pause-182142 systemd[1]: kubelet.service: Succeeded.
	Nov 07 18:24:43 pause-182142 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [5ee28a460839] <==
	* I1107 18:24:34.049779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 18:24:34.079225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 18:24:34.079463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 18:24:34.091698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 18:24:34.092098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6!
	I1107 18:24:34.121150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1dfff7be-9253-472b-811a-6e2ac2fa00b0", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6 became leader
	I1107 18:24:34.292346       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-182142_c8fb11ca-e266-48c2-8785-532891fe6ab6!
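The provisioner's lease above is endpoints-based leader election: the current holder is recorded in an annotation on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event. A hedged way to inspect it (the annotation key control-plane.alpha.kubernetes.io/leader is the one conventionally used by this election mechanism, an assumption not shown in the log):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'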
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 18:25:22.956459    8528 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
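The log fetch above fails because the apiserver is paused, so the TLS handshake simply never completes. A minimal manual repro under that assumption, reusing the binary and kubeconfig paths from the stderr block; --request-timeout is a standard kubectl flag and the 10s value is illustrative:

    sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig --request-timeout=10s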
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-182142 -n pause-182142
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-182142 -n pause-182142: exit status 2 (1.8434136s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-182142" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/PauseAgain (45.12s)
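As the helpers note, exit status 2 from the status probe "may be ok": minikube status exits non-zero whenever a component is not Running, and here the apiserver legitimately reports Paused. The same check by hand, with the profile name taken from the log:

    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-182142
    # prints "Paused" and exits 2 while the cluster is paused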

                                                
                                    
TestNetworkPlugins/group/cilium/Start (583.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-182331 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
E1107 18:42:21.826789    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:21.842339    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:21.858273    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:21.889059    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:21.935977    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:22.028334    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:22.195521    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:22.529493    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:23.178712    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:24.460804    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:27.021677    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:32.152008    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:42.343125    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:42:42.403516    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:42:44.509245    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.524145    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.539337    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.571024    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.618804    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.714329    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:44.887506    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:45.219262    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:45.860320    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:47.141374    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:49.709101    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:42:54.846049    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:43:02.887913    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:43:05.097509    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:43:25.588230    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-182331 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (9m43.226193s)

                                                
                                                
-- stdout --
	* [cilium-182331] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-182331 in cluster cilium-182331
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 18:42:11.633515    9840 out.go:296] Setting OutFile to fd 1728 ...
	I1107 18:42:11.714641    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:42:11.714641    9840 out.go:309] Setting ErrFile to fd 1872...
	I1107 18:42:11.714641    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:42:11.733638    9840 out.go:303] Setting JSON to false
	I1107 18:42:11.739650    9840 start.go:116] hostinfo: {"hostname":"minikube2","uptime":11169,"bootTime":1667835362,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:42:11.739650    9840 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:42:11.743649    9840 out.go:177] * [cilium-182331] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:42:11.748643    9840 notify.go:220] Checking for updates...
	I1107 18:42:11.750648    9840 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:42:11.753639    9840 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:42:11.756642    9840 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:42:11.759693    9840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:42:11.766642    9840 config.go:180] Loaded profile config "auto-182327": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:42:11.766642    9840 config.go:180] Loaded profile config "kindnet-182329": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:42:11.767647    9840 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:42:11.767647    9840 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:42:12.085490    9840 docker.go:137] docker version: linux-20.10.20
	I1107 18:42:12.095493    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:42:12.746987    9840 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:42:12.2582147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:42:12.750999    9840 out.go:177] * Using the docker driver based on user configuration
	I1107 18:42:12.752978    9840 start.go:282] selected driver: docker
	I1107 18:42:12.752978    9840 start.go:808] validating driver "docker" against <nil>
	I1107 18:42:12.752978    9840 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:42:12.829400    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:42:13.520637    9840 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:42:13.0126478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:42:13.520884    9840 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:42:13.521442    9840 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 18:42:13.526168    9840 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:42:13.528832    9840 cni.go:95] Creating CNI manager for "cilium"
	I1107 18:42:13.528832    9840 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1107 18:42:13.528832    9840 start_flags.go:317] config:
	{Name:cilium-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:42:13.532776    9840 out.go:177] * Starting control plane node cilium-182331 in cluster cilium-182331
	I1107 18:42:13.538775    9840 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:42:13.542775    9840 out.go:177] * Pulling base image ...
	I1107 18:42:13.548787    9840 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:42:13.548787    9840 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 18:42:13.548787    9840 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 18:42:13.548787    9840 cache.go:57] Caching tarball of preloaded images
	I1107 18:42:13.548787    9840 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 18:42:13.549762    9840 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 18:42:13.549762    9840 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\config.json ...
	I1107 18:42:13.549762    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\config.json: {Name:mk88ce54eb004f8cba782de92fb1bd999e42738e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:42:13.760655    9840 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 18:42:13.760655    9840 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 18:42:13.760655    9840 cache.go:208] Successfully downloaded all kic artifacts
	I1107 18:42:13.760655    9840 start.go:364] acquiring machines lock for cilium-182331: {Name:mk603f1471c97f44fda8cf858c64229d61c2a8cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 18:42:13.760655    9840 start.go:368] acquired machines lock for "cilium-182331" in 0s
	I1107 18:42:13.761231    9840 start.go:93] Provisioning new machine with config: &{Name:cilium-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:42:13.761434    9840 start.go:125] createHost starting for "" (driver="docker")
	I1107 18:42:13.773932    9840 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 18:42:13.773932    9840 start.go:159] libmachine.API.Create for "cilium-182331" (driver="docker")
	I1107 18:42:13.773932    9840 client.go:168] LocalClient.Create starting
	I1107 18:42:13.773932    9840 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1107 18:42:13.773932    9840 main.go:134] libmachine: Decoding PEM data...
	I1107 18:42:13.773932    9840 main.go:134] libmachine: Parsing certificate...
	I1107 18:42:13.775391    9840 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1107 18:42:13.775857    9840 main.go:134] libmachine: Decoding PEM data...
	I1107 18:42:13.775857    9840 main.go:134] libmachine: Parsing certificate...
	I1107 18:42:13.784368    9840 cli_runner.go:164] Run: docker network inspect cilium-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 18:42:13.963345    9840 cli_runner.go:211] docker network inspect cilium-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 18:42:13.971283    9840 network_create.go:272] running [docker network inspect cilium-182331] to gather additional debugging logs...
	I1107 18:42:13.971283    9840 cli_runner.go:164] Run: docker network inspect cilium-182331
	W1107 18:42:14.169364    9840 cli_runner.go:211] docker network inspect cilium-182331 returned with exit code 1
	I1107 18:42:14.169364    9840 network_create.go:275] error running [docker network inspect cilium-182331]: docker network inspect cilium-182331: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-182331
	I1107 18:42:14.169364    9840 network_create.go:277] output of [docker network inspect cilium-182331]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-182331
	
	** /stderr **
	I1107 18:42:14.176884    9840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 18:42:14.439478    9840 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014ad88] misses:0}
	I1107 18:42:14.439478    9840 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:42:14.439478    9840 network_create.go:115] attempt to create docker network cilium-182331 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 18:42:14.448831    9840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331
	W1107 18:42:14.650060    9840 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331 returned with exit code 1
	W1107 18:42:14.650060    9840 network_create.go:107] failed to create docker network cilium-182331 192.168.49.0/24, will retry: subnet is taken
	I1107 18:42:14.671010    9840 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad88] amended:false}} dirty:map[] misses:0}
	I1107 18:42:14.671010    9840 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:42:14.692324    9840 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad88] amended:true}} dirty:map[192.168.49.0:0xc00014ad88 192.168.58.0:0xc00071c798] misses:0}
	I1107 18:42:14.692324    9840 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:42:14.692647    9840 network_create.go:115] attempt to create docker network cilium-182331 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 18:42:14.700571    9840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331
	W1107 18:42:14.935016    9840 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331 returned with exit code 1
	W1107 18:42:14.935119    9840 network_create.go:107] failed to create docker network cilium-182331 192.168.58.0/24, will retry: subnet is taken
	I1107 18:42:14.957504    9840 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad88] amended:true}} dirty:map[192.168.49.0:0xc00014ad88 192.168.58.0:0xc00071c798] misses:1}
	I1107 18:42:14.958001    9840 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:42:14.977816    9840 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad88] amended:true}} dirty:map[192.168.49.0:0xc00014ad88 192.168.58.0:0xc00071c798 192.168.67.0:0xc0007c0ce0] misses:1}
	I1107 18:42:14.978500    9840 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:42:14.978644    9840 network_create.go:115] attempt to create docker network cilium-182331 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 18:42:14.988818    9840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331
	I1107 18:42:24.293549    9840 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-182331 cilium-182331: (9.3046301s)
	I1107 18:42:24.293808    9840 network_create.go:99] docker network cilium-182331 192.168.67.0/24 created
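The 18:42:14-18:42:24 entries above show the subnet search: reserve a candidate private /24, attempt to create the docker network, and advance past every "subnet is taken" failure until one succeeds (192.168.67.0/24 on the third try). A simplified bash sketch of that loop, with the candidate list and network name taken from the log; this is an illustration, not minikube source, and it drops the ip-masq/icc options and minikube labels:

    for net in 192.168.49 192.168.58 192.168.67; do
      # take the first free /24; "subnet is taken" falls through to the next one
      docker network create --driver=bridge --subnet="$net.0/24" \
        --gateway="$net.1" -o com.docker.network.driver.mtu=1500 \
        cilium-182331 && break
    done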
	I1107 18:42:24.293808    9840 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-182331" container
	I1107 18:42:24.310920    9840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 18:42:24.530862    9840 cli_runner.go:164] Run: docker volume create cilium-182331 --label name.minikube.sigs.k8s.io=cilium-182331 --label created_by.minikube.sigs.k8s.io=true
	I1107 18:42:24.743072    9840 oci.go:103] Successfully created a docker volume cilium-182331
	I1107 18:42:24.751056    9840 cli_runner.go:164] Run: docker run --rm --name cilium-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-182331 --entrypoint /usr/bin/test -v cilium-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 18:42:26.485617    9840 cli_runner.go:217] Completed: docker run --rm --name cilium-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-182331 --entrypoint /usr/bin/test -v cilium-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (1.7345423s)
	I1107 18:42:26.485617    9840 oci.go:107] Successfully prepared a docker volume cilium-182331
	I1107 18:42:26.485617    9840 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:42:26.485617    9840 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 18:42:26.493615    9840 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-182331:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 18:42:52.428722    9840 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-182331:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (25.9346839s)
	I1107 18:42:52.428864    9840 kic.go:188] duration metric: took 25.942964 seconds to extract preloaded images to volume
	I1107 18:42:52.437608    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:42:53.124769    9840 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:42:52.6014735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:42:53.133574    9840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 18:42:53.867593    9840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-182331 --name cilium-182331 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-182331 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-182331 --network cilium-182331 --ip 192.168.67.2 --volume cilium-182331:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 18:42:55.617691    9840 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-182331 --name cilium-182331 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-182331 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-182331 --network cilium-182331 --ip 192.168.67.2 --volume cilium-182331:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456: (1.750079s)
	I1107 18:42:55.626694    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Running}}
	I1107 18:42:55.879968    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:42:56.129087    9840 cli_runner.go:164] Run: docker exec cilium-182331 stat /var/lib/dpkg/alternatives/iptables
	I1107 18:42:56.517986    9840 oci.go:144] the created container "cilium-182331" has a running status.
	I1107 18:42:56.517986    9840 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa...
	I1107 18:42:57.069509    9840 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 18:42:57.448352    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:42:57.710833    9840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 18:42:57.710833    9840 kic_runner.go:114] Args: [docker exec --privileged cilium-182331 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 18:42:58.100887    9840 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa...
	I1107 18:42:58.714099    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:42:58.965005    9840 machine.go:88] provisioning docker machine ...
	I1107 18:42:58.965005    9840 ubuntu.go:169] provisioning hostname "cilium-182331"
	I1107 18:42:58.974028    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:42:59.208357    9840 main.go:134] libmachine: Using SSH client type: native
	I1107 18:42:59.214357    9840 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 61810 <nil> <nil>}
	I1107 18:42:59.215358    9840 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-182331 && echo "cilium-182331" | sudo tee /etc/hostname
	I1107 18:42:59.456550    9840 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-182331
	
	I1107 18:42:59.471149    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:42:59.739338    9840 main.go:134] libmachine: Using SSH client type: native
	I1107 18:42:59.739338    9840 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 61810 <nil> <nil>}
	I1107 18:42:59.739338    9840 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-182331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-182331/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-182331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 18:42:59.942028    9840 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 18:42:59.942028    9840 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1107 18:42:59.942028    9840 ubuntu.go:177] setting up certificates
	I1107 18:42:59.942028    9840 provision.go:83] configureAuth start
	I1107 18:42:59.955897    9840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-182331
	I1107 18:43:00.204577    9840 provision.go:138] copyHostCerts
	I1107 18:43:00.204989    9840 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1107 18:43:00.204989    9840 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1107 18:43:00.205996    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1107 18:43:00.207590    9840 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1107 18:43:00.207590    9840 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1107 18:43:00.207590    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1107 18:43:00.209591    9840 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1107 18:43:00.209591    9840 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1107 18:43:00.209591    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1107 18:43:00.210587    9840 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-182331 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-182331]
	I1107 18:43:00.490627    9840 provision.go:172] copyRemoteCerts
	I1107 18:43:00.501602    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 18:43:00.510364    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:00.721931    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:00.875828    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 18:43:00.943656    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 18:43:01.000906    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 18:43:01.067687    9840 provision.go:86] duration metric: configureAuth took 1.1255657s
	I1107 18:43:01.067734    9840 ubuntu.go:193] setting minikube options for container-runtime
	I1107 18:43:01.068655    9840 config.go:180] Loaded profile config "cilium-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:43:01.079021    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:01.340540    9840 main.go:134] libmachine: Using SSH client type: native
	I1107 18:43:01.340789    9840 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 61810 <nil> <nil>}
	I1107 18:43:01.340789    9840 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 18:43:01.526764    9840 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 18:43:01.526764    9840 ubuntu.go:71] root file system type: overlay
	I1107 18:43:01.526764    9840 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 18:43:01.534756    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:01.776569    9840 main.go:134] libmachine: Using SSH client type: native
	I1107 18:43:01.776569    9840 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 61810 <nil> <nil>}
	I1107 18:43:01.776569    9840 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 18:43:02.026902    9840 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 18:43:02.040703    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:02.275462    9840 main.go:134] libmachine: Using SSH client type: native
	I1107 18:43:02.275462    9840 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 61810 <nil> <nil>}
	I1107 18:43:02.275462    9840 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 18:43:03.773014    9840 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 18:43:01.999988000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 18:43:03.773014    9840 machine.go:91] provisioned docker machine in 4.8079568s
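The one-liner above (diff || { mv; daemon-reload; restart; }) is an update-if-changed pattern: stage docker.service.new, diff it against the live unit, and only swap the file and restart Docker when they differ, which is why the diff output is printed on the first run. A toy Go equivalent of the swap step (updateUnit is a hypothetical helper, not minikube's function):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit replaces live with staged only when their contents differ.
// A missing live file counts as "changed".
func updateUnit(live, staged string) (changed bool, err error) {
	a, _ := os.ReadFile(live)
	b, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	if bytes.Equal(a, b) {
		return false, os.Remove(staged)
	}
	return true, os.Rename(staged, live)
}

func main() {
	changed, err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	// The caller would run `systemctl daemon-reload && systemctl restart docker`
	// only when changed is true.
	fmt.Println(changed, err)
}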
	I1107 18:43:03.773014    9840 client.go:171] LocalClient.Create took 49.9985372s
	I1107 18:43:03.773014    9840 start.go:167] duration metric: libmachine.API.Create for "cilium-182331" took 49.9985372s
	I1107 18:43:03.773014    9840 start.go:300] post-start starting for "cilium-182331" (driver="docker")
	I1107 18:43:03.773014    9840 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 18:43:03.791940    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 18:43:03.798433    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:04.077452    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:04.254643    9840 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 18:43:04.271154    9840 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 18:43:04.271154    9840 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 18:43:04.271154    9840 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 18:43:04.271154    9840 info.go:137] Remote host: Ubuntu 20.04.5 LTS
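The three "Couldn't set key" warnings are libmachine decoding /etc/os-release into a struct and noting keys it has no field for. Roughly, and with an invented (deliberately incomplete) set of known keys:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Keys the hypothetical struct knows about; anything else triggers a warning,
	// exactly the shape of the log lines above.
	known := map[string]bool{"NAME": true, "VERSION": true, "ID": true, "PRETTY_NAME": true}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		if !known[k] {
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
			continue
		}
		fmt.Printf("%s = %s\n", k, strings.Trim(v, `"`))
	}
}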
	I1107 18:43:04.271154    9840 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1107 18:43:04.271154    9840 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1107 18:43:04.272153    9840 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem -> 99482.pem in /etc/ssl/certs
	I1107 18:43:04.287170    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 18:43:04.312151    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /etc/ssl/certs/99482.pem (1708 bytes)
	I1107 18:43:04.373409    9840 start.go:303] post-start completed in 600.3889ms
	I1107 18:43:04.385400    9840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-182331
	I1107 18:43:04.611648    9840 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\config.json ...
	I1107 18:43:04.624650    9840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:43:04.632670    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:04.878691    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:05.007051    9840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 18:43:05.018063    9840 start.go:128] duration metric: createHost completed in 51.2560704s
	I1107 18:43:05.018063    9840 start.go:83] releasing machines lock for "cilium-182331", held for 51.2568493s
	I1107 18:43:05.027051    9840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-182331
	I1107 18:43:05.275670    9840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 18:43:05.286545    9840 ssh_runner.go:195] Run: systemctl --version
	I1107 18:43:05.291672    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:05.294680    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:05.523958    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:05.551137    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:05.690988    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 18:43:05.793992    9840 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 18:43:05.854361    9840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:43:06.017166    9840 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 18:43:06.262660    9840 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 18:43:06.304297    9840 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 18:43:06.315302    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 18:43:06.343009    9840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 18:43:06.406490    9840 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 18:43:06.598191    9840 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 18:43:06.780660    9840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:43:07.022814    9840 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 18:43:07.645975    9840 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 18:43:07.891331    9840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:43:08.127507    9840 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 18:43:08.165310    9840 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 18:43:08.181485    9840 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 18:43:08.192483    9840 start.go:472] Will wait 60s for crictl version
	I1107 18:43:08.203476    9840 ssh_runner.go:195] Run: sudo crictl version
	I1107 18:43:08.292953    9840 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 18:43:08.304701    9840 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:43:08.394244    9840 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:43:08.476013    9840 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 18:43:08.484022    9840 cli_runner.go:164] Run: docker exec -t cilium-182331 dig +short host.docker.internal
	I1107 18:43:08.950146    9840 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 18:43:08.972156    9840 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 18:43:08.984150    9840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
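The bash one-liner above upserts a single /etc/hosts entry: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the result back. A Go rendering of the same trick (a sketch; the real command runs through sudo on the remote side):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.65.2\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for this name, matching grep -v '\thost.minikube.internal$'.
		if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}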
	I1107 18:43:09.018865    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:09.269282    9840 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:43:09.278085    9840 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:43:09.341511    9840 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:43:09.342524    9840 docker.go:543] Images already preloaded, skipping extraction
	I1107 18:43:09.357523    9840 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:43:09.449535    9840 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:43:09.449535    9840 cache_images.go:84] Images are preloaded, skipping loading
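The preload check lists the images Docker already has and, since every expected image is present, skips extracting the preload tarball. A sketch of that comparison (the expected list below is abbreviated from the stdout block above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	for _, img := range expected {
		fmt.Println(img, "preloaded:", have[img])
	}
}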
	I1107 18:43:09.466593    9840 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 18:43:09.648355    9840 cni.go:95] Creating CNI manager for "cilium"
	I1107 18:43:09.648355    9840 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 18:43:09.648355    9840 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-182331 NodeName:cilium-182331 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 18:43:09.648355    9840 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-182331"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 18:43:09.648355    9840 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-182331 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I1107 18:43:09.665351    9840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 18:43:09.697007    9840 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 18:43:09.708879    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 18:43:09.741650    9840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1107 18:43:09.840487    9840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 18:43:09.894857    9840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1107 18:43:09.953468    9840 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 18:43:09.966489    9840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 18:43:09.991481    9840 certs.go:54] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331 for IP: 192.168.67.2
	I1107 18:43:09.991481    9840 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1107 18:43:09.992483    9840 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1107 18:43:09.992483    9840 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.key
	I1107 18:43:09.992483    9840 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.crt with IP's: []
	I1107 18:43:10.191444    9840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.crt ...
	I1107 18:43:10.191444    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.crt: {Name:mke8136d94263a2bf2f04989675d2a3d9c524a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:10.192452    9840 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.key ...
	I1107 18:43:10.192452    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\client.key: {Name:mkf3f75915b661fb89d360b5b43b176cc304339d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:10.194453    9840 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key.c7fa3a9e
	I1107 18:43:10.194453    9840 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 18:43:10.493136    9840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt.c7fa3a9e ...
	I1107 18:43:10.493136    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt.c7fa3a9e: {Name:mk392248c5a165729187e723fb346eb05173476e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:10.495138    9840 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key.c7fa3a9e ...
	I1107 18:43:10.495138    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key.c7fa3a9e: {Name:mkc6d199d47d8cfde07706cd3b2dd6a7abde2dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:10.496142    9840 certs.go:320] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt
	I1107 18:43:10.504144    9840 certs.go:324] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key
	I1107 18:43:10.505133    9840 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.key
	I1107 18:43:10.506131    9840 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.crt with IP's: []
	I1107 18:43:11.077268    9840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.crt ...
	I1107 18:43:11.077268    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.crt: {Name:mkea041861d480f4f4a03fbb69cfe50d56cc21f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:11.079781    9840 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.key ...
	I1107 18:43:11.079781    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.key: {Name:mk2ab9d6b26284911ebdf988e3268e901b333b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
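Each of those lock.go entries shows a cert write guarded by a named lock with a 500ms retry delay and a 1m0s timeout. A toy equivalent built on an O_EXCL lockfile (lockAndWrite is hypothetical, not minikube's implementation):

package main

import (
	"errors"
	"io/fs"
	"os"
	"time"
)

func lockAndWrite(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: exactly one writer gets the lock.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lock)
			f.Close()
			return os.WriteFile(path, data, 0o600)
		}
		if !errors.Is(err, fs.ErrExist) || time.Now().After(deadline) {
			return err
		}
		time.Sleep(delay) // another writer holds the lock; retry
	}
}

func main() {
	if err := lockAndWrite("client.crt", []byte("pem data"), 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}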
	I1107 18:43:11.094104    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem (1338 bytes)
	W1107 18:43:11.094104    9840 certs.go:384] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948_empty.pem, impossibly tiny 0 bytes
	I1107 18:43:11.094104    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1107 18:43:11.094104    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1107 18:43:11.094104    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1107 18:43:11.095713    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1107 18:43:11.095713    9840 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem (1708 bytes)
	I1107 18:43:11.097402    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 18:43:11.166825    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 18:43:11.237995    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 18:43:11.301604    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-182331\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 18:43:11.374476    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 18:43:11.426981    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 18:43:11.501159    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 18:43:11.571429    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 18:43:11.623457    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 18:43:11.704505    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem --> /usr/share/ca-certificates/9948.pem (1338 bytes)
	I1107 18:43:11.776078    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /usr/share/ca-certificates/99482.pem (1708 bytes)
	I1107 18:43:11.857042    9840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 18:43:11.910141    9840 ssh_runner.go:195] Run: openssl version
	I1107 18:43:11.959893    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9948.pem && ln -fs /usr/share/ca-certificates/9948.pem /etc/ssl/certs/9948.pem"
	I1107 18:43:11.996903    9840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9948.pem
	I1107 18:43:12.006907    9840 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 17:01 /usr/share/ca-certificates/9948.pem
	I1107 18:43:12.016905    9840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9948.pem
	I1107 18:43:12.048917    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9948.pem /etc/ssl/certs/51391683.0"
	I1107 18:43:12.097641    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99482.pem && ln -fs /usr/share/ca-certificates/99482.pem /etc/ssl/certs/99482.pem"
	I1107 18:43:12.131861    9840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99482.pem
	I1107 18:43:12.143866    9840 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 17:01 /usr/share/ca-certificates/99482.pem
	I1107 18:43:12.156864    9840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99482.pem
	I1107 18:43:12.179912    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 18:43:12.219129    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 18:43:12.263604    9840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:43:12.279916    9840 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:43:12.291775    9840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:43:12.336308    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
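The test -L / ln -fs commands above build OpenSSL's hashed-CA lookup layout: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, where the hash comes from openssl x509 -hash. A rough sketch that shells out to openssl the same way the log does:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}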
	I1107 18:43:12.370609    9840 kubeadm.go:396] StartCluster: {Name:cilium-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:43:12.385763    9840 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 18:43:12.484046    9840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 18:43:12.514866    9840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 18:43:12.541902    9840 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 18:43:12.561110    9840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 18:43:12.587047    9840 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 18:43:12.587047    9840 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 18:43:12.712020    9840 kubeadm.go:317] W1107 18:43:12.708699    1221 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 18:43:12.813714    9840 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 18:43:13.031092    9840 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 18:43:40.141251    9840 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 18:43:40.142254    9840 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 18:43:40.142254    9840 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 18:43:40.142254    9840 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 18:43:40.142254    9840 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1107 18:43:40.143272    9840 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 18:43:40.149258    9840 out.go:204]   - Generating certificates and keys ...
	I1107 18:43:40.149258    9840 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 18:43:40.149258    9840 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 18:43:40.149258    9840 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 18:43:40.149258    9840 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 18:43:40.149258    9840 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 18:43:40.150279    9840 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 18:43:40.150279    9840 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 18:43:40.150279    9840 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-182331 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 18:43:40.150279    9840 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 18:43:40.151309    9840 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-182331 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 18:43:40.151309    9840 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 18:43:40.151309    9840 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 18:43:40.152269    9840 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 18:43:40.152269    9840 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 18:43:40.152269    9840 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 18:43:40.152269    9840 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 18:43:40.152269    9840 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 18:43:40.153334    9840 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 18:43:40.153334    9840 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 18:43:40.153334    9840 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 18:43:40.153334    9840 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 18:43:40.154264    9840 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 18:43:40.159286    9840 out.go:204]   - Booting up control plane ...
	I1107 18:43:40.159286    9840 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 18:43:40.160334    9840 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 18:43:40.160334    9840 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 18:43:40.160334    9840 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 18:43:40.161262    9840 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 18:43:40.161262    9840 kubeadm.go:317] [apiclient] All control plane components are healthy after 21.006437 seconds
	I1107 18:43:40.161262    9840 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 18:43:40.161262    9840 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 18:43:40.161262    9840 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 18:43:40.162294    9840 kubeadm.go:317] [mark-control-plane] Marking the node cilium-182331 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 18:43:40.162294    9840 kubeadm.go:317] [bootstrap-token] Using token: 5jdbjw.wk75r9btgo8b08tu
	I1107 18:43:40.165339    9840 out.go:204]   - Configuring RBAC rules ...
	I1107 18:43:40.165339    9840 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 18:43:40.165339    9840 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 18:43:40.166265    9840 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 18:43:40.166265    9840 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 18:43:40.166265    9840 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 18:43:40.167262    9840 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 18:43:40.167262    9840 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 18:43:40.167262    9840 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1107 18:43:40.167262    9840 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1107 18:43:40.167262    9840 kubeadm.go:317] 
	I1107 18:43:40.167262    9840 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1107 18:43:40.167262    9840 kubeadm.go:317] 
	I1107 18:43:40.168262    9840 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1107 18:43:40.168262    9840 kubeadm.go:317] 
	I1107 18:43:40.168262    9840 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1107 18:43:40.168262    9840 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 18:43:40.168262    9840 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 18:43:40.168262    9840 kubeadm.go:317] 
	I1107 18:43:40.168262    9840 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1107 18:43:40.169266    9840 kubeadm.go:317] 
	I1107 18:43:40.169266    9840 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 18:43:40.169266    9840 kubeadm.go:317] 
	I1107 18:43:40.169266    9840 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1107 18:43:40.169266    9840 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 18:43:40.170267    9840 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 18:43:40.170267    9840 kubeadm.go:317] 
	I1107 18:43:40.170267    9840 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 18:43:40.170267    9840 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1107 18:43:40.170267    9840 kubeadm.go:317] 
	I1107 18:43:40.170267    9840 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 5jdbjw.wk75r9btgo8b08tu \
	I1107 18:43:40.171254    9840 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae \
	I1107 18:43:40.171254    9840 kubeadm.go:317] 	--control-plane 
	I1107 18:43:40.171254    9840 kubeadm.go:317] 
	I1107 18:43:40.171254    9840 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1107 18:43:40.171254    9840 kubeadm.go:317] 
	I1107 18:43:40.171254    9840 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 5jdbjw.wk75r9btgo8b08tu \
	I1107 18:43:40.172264    9840 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae 
	I1107 18:43:40.172264    9840 cni.go:95] Creating CNI manager for "cilium"
	I1107 18:43:40.177272    9840 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I1107 18:43:40.194255    9840 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
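Cilium needs the BPF filesystem mounted at /sys/fs/bpf; the command above mounts it only if /proc/mounts does not already list it. The same idempotent mount via golang.org/x/sys/unix (Linux-only, needs root):

package main

import (
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

func main() {
	mounts, err := os.ReadFile("/proc/mounts")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(mounts), "bpffs /sys/fs/bpf") {
		return // already mounted, nothing to do
	}
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		panic(err)
	}
}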
	I1107 18:43:40.351299    9840 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I1107 18:43:40.351299    9840 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I1107 18:43:40.351299    9840 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the less packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# Support for leases was introduced in coordination.k8s.io/v1 in the Kubernetes 1.14 release.
	# Cilium currently does not support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
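	# As an illustration, the currently active operator replica can be found by
	# inspecting the Lease it holds (lease name assumed from the Cilium defaults):
	#   kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml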
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
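	        # Note: this annotation is deprecated upstream in favor of pod
	        # priority; the spec below also sets
	        # priorityClassName: system-node-critical to the same effect.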
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount the cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go binary is invoked to avoid any dependency
	          # on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. The binary is copied to
	          # the same directory where we install the cilium cni plugin so that
	          # exec permissions are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
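	          # (--cgroup=/hostproc/1/ns/cgroup and --mount=/hostproc/1/ns/mnt
	          # enter PID 1's namespaces, so the mount performed by cilium-mount
	          # lands on the host rather than inside this init container.)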
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount the cgroup filesystem from the host into the cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install the cilium cni plugin on the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install the cilium cni configuration on the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes versions >= 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
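	  # (A sketch: raising replicas to 2 on a multi-node cluster would enable HA
	  # via the lease-based leader election authorized in the operator
	  # ClusterRole above.)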
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I1107 18:43:40.352278    9840 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 18:43:40.352278    9840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I1107 18:43:40.657080    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 18:43:43.854596    9840 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.1974815s)
	I1107 18:43:43.854596    9840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 18:43:43.875582    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:43.876584    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=cilium-182331 minikube.k8s.io/updated_at=2022_11_07T18_43_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:43.881607    9840 ops.go:34] apiserver oom_adj: -16
	I1107 18:43:44.278600    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:45.080564    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:45.578324    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:46.071123    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:46.573820    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:47.075193    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:47.572673    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:48.065711    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:48.577560    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:49.075542    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:49.573817    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:50.069385    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:50.569841    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:51.580398    9840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:43:52.450774    9840 kubeadm.go:1067] duration metric: took 8.5960845s to wait for elevateKubeSystemPrivileges.
	I1107 18:43:52.450774    9840 kubeadm.go:398] StartCluster complete in 40.0797637s
	I1107 18:43:52.450774    9840 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:52.451572    9840 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:43:52.456600    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:43:53.146557    9840 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-182331" rescaled to 1
	I1107 18:43:53.146557    9840 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:43:53.149579    9840 out.go:177] * Verifying Kubernetes components...
	I1107 18:43:53.147568    9840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 18:43:53.147568    9840 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 18:43:53.148546    9840 config.go:180] Loaded profile config "cilium-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:43:53.154368    9840 addons.go:65] Setting storage-provisioner=true in profile "cilium-182331"
	I1107 18:43:53.154368    9840 addons.go:227] Setting addon storage-provisioner=true in "cilium-182331"
	W1107 18:43:53.154368    9840 addons.go:236] addon storage-provisioner should already be in state true
	I1107 18:43:53.154368    9840 addons.go:65] Setting default-storageclass=true in profile "cilium-182331"
	I1107 18:43:53.154368    9840 host.go:66] Checking if "cilium-182331" exists ...
	I1107 18:43:53.154368    9840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-182331"
	I1107 18:43:53.176440    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:43:53.190437    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:43:53.191440    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:43:53.444269    9840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 18:43:53.447392    9840 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:43:53.447392    9840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 18:43:53.462387    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:53.537796    9840 addons.go:227] Setting addon default-storageclass=true in "cilium-182331"
	W1107 18:43:53.537796    9840 addons.go:236] addon default-storageclass should already be in state true
	I1107 18:43:53.538295    9840 host.go:66] Checking if "cilium-182331" exists ...
	I1107 18:43:53.570938    9840 cli_runner.go:164] Run: docker container inspect cilium-182331 --format={{.State.Status}}
	I1107 18:43:53.706922    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:53.818445    9840 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 18:43:53.818445    9840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 18:43:53.826317    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:53.855331    9840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 18:43:53.872336    9840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-182331
	I1107 18:43:54.088323    9840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61810 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-182331\id_rsa Username:docker}
	I1107 18:43:54.106323    9840 node_ready.go:35] waiting up to 5m0s for node "cilium-182331" to be "Ready" ...
	I1107 18:43:54.138906    9840 node_ready.go:49] node "cilium-182331" has status "Ready":"True"
	I1107 18:43:54.138906    9840 node_ready.go:38] duration metric: took 32.5823ms waiting for node "cilium-182331" to be "Ready" ...
	I1107 18:43:54.138906    9840 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 18:43:54.250563    9840 pod_ready.go:78] waiting up to 5m0s for pod "cilium-k65t2" in "kube-system" namespace to be "Ready" ...
	I1107 18:43:54.277855    9840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:43:55.052534    9840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 18:43:56.440491    9840 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.5838793s)
	I1107 18:43:56.440491    9840 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1107 18:43:56.538608    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:43:57.935337    9840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.657442s)
	I1107 18:43:57.936359    9840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8837938s)
	I1107 18:43:57.941400    9840 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 18:43:57.944357    9840 addons.go:488] enableAddons completed in 4.7967368s
	I1107 18:43:58.933736    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:01.537406    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:03.893985    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:06.377260    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:08.387605    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:10.884713    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:13.386845    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:15.886214    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:20.943039    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:23.378588    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:25.381904    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:27.391606    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:29.880326    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:31.940516    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:33.941643    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:36.389300    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:38.881471    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:40.881937    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:43.135047    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:45.452545    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:47.952492    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:50.762810    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:52.876716    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:54.955720    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:57.451159    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:03.140899    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:05.374983    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:07.447507    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:09.944107    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:12.037182    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:14.192741    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:16.543149    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:18.951121    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:22.172039    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:24.541165    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:26.644547    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:29.522012    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:34.358576    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:36.384043    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:38.876455    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:40.876812    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:43.373755    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:45.390619    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:47.874134    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:50.123103    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:52.380123    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:54.397969    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:56.878180    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:59.384051    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:01.882097    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:04.384669    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:06.391499    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:11.992421    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:14.382325    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:16.884916    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:19.392454    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:21.441039    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:23.882319    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:25.898102    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:28.378657    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:30.383930    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:32.891538    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:35.380652    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:37.890008    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:40.389305    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:42.895935    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:45.378718    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:47.380847    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:49.389113    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:51.883402    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:54.380843    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:56.884673    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:58.888387    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:01.376112    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:03.380584    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:05.385074    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:07.940072    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:10.387257    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:12.884745    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:15.377890    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:17.378575    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:19.381787    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:21.881439    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:24.505044    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:26.888824    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:29.383447    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:31.877140    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:33.890478    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:35.892858    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:38.386979    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:40.882414    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:42.885905    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:45.375358    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:47.385666    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:49.889617    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:52.387721    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:54.398841    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:54.398841    9840 pod_ready.go:81] duration metric: took 4m0.1456719s waiting for pod "cilium-k65t2" in "kube-system" namespace to be "Ready" ...
	E1107 18:47:54.398841    9840 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 18:47:54.398841    9840 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-656749584-4fwnw" in "kube-system" namespace to be "Ready" ...
	I1107 18:47:54.411850    9840 pod_ready.go:92] pod "cilium-operator-656749584-4fwnw" in "kube-system" namespace has status "Ready":"True"
	I1107 18:47:54.411850    9840 pod_ready.go:81] duration metric: took 13.0088ms waiting for pod "cilium-operator-656749584-4fwnw" in "kube-system" namespace to be "Ready" ...
	I1107 18:47:54.411850    9840 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-pt6gp" in "kube-system" namespace to be "Ready" ...
	I1107 18:47:56.539497    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:58.976440    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:00.985954    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:03.468508    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:05.959209    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:07.978921    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:10.464757    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:12.978563    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:15.473338    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:17.971706    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:19.976874    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:22.480430    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:24.979776    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:27.473546    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:29.475886    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:31.960452    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:33.979955    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:36.472243    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:38.472333    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:40.476525    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:42.967753    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:44.972905    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:47.469326    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:49.481219    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:51.971109    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:54.481832    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:56.484399    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:59.055123    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:01.499998    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:03.976432    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:05.977605    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:08.462578    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:10.464463    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:12.473010    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:14.481532    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:16.967427    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:18.970734    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:20.976340    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:23.472946    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:25.966612    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:27.974749    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:29.977947    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:32.472569    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:34.975813    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:37.489386    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:39.980500    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:42.480448    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:44.968493    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:46.970992    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:48.973423    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:51.472552    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:53.479371    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:55.991740    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:58.471882    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:00.472171    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:02.483103    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:04.969253    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:06.971627    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:08.977920    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:10.978127    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:13.474350    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:15.544173    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:17.973719    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:20.463403    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:22.471864    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:24.963752    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:26.977688    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:29.465472    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:31.471096    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:33.477915    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:35.974216    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:37.989097    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:40.467382    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:42.481518    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:44.966199    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:46.983914    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:49.475923    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:51.979282    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:53.980237    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:56.471159    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:58.481038    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:00.486245    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:02.965395    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:04.969986    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:06.973846    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:09.473163    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:11.476538    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:13.987707    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:16.472954    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:18.473857    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:20.475685    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:22.969931    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:24.971749    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:26.973811    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:29.481175    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:31.973471    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:34.483428    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:36.978915    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:39.458169    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:41.468792    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:43.478725    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:45.481278    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:47.973886    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:50.494022    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:53.043518    9840 pod_ready.go:102] pod "coredns-565d847f94-pt6gp" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:54.529720    9840 pod_ready.go:81] duration metric: took 4m0.1152557s waiting for pod "coredns-565d847f94-pt6gp" in "kube-system" namespace to be "Ready" ...
	E1107 18:51:54.529780    9840 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 18:51:54.529838    9840 pod_ready.go:38] duration metric: took 8m0.3857104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 18:51:54.533213    9840 out.go:177] 
	W1107 18:51:54.535960    9840 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1107 18:51:54.536017    9840 out.go:239] * 
	W1107 18:51:54.538372    9840 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 18:51:54.543819    9840 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (583.88s)
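
The pod_ready.go lines above show the loop behind this failure: minikube polls each system-critical pod for the Ready condition and gives up once the wait budget runs out, surfacing GUEST_START / exit status 80. A minimal client-go sketch of that polling pattern (helper names are illustrative; this is not minikube's actual pod_ready.go code):

// waitPodsReady polls pods matching a label selector until every one reports
// the Ready condition, or the timeout expires: the pattern behind the
// repeated `status "Ready":"False"` lines above. Sketch only.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient: keep polling until the deadline
		}
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				return false, nil // e.g. coredns still reporting "Ready":"False"
			}
		}
		return true, nil
	})
}

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}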

                                                
                                    
TestNetworkPlugins/group/calico/Start (596.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-182331 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-182331 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (9m55.9943056s)

                                                
                                                
-- stdout --
	* [calico-182331] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-182331 in cluster calico-182331
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 18:44:22.262086    6656 out.go:296] Setting OutFile to fd 2044 ...
	I1107 18:44:22.344133    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:22.344133    6656 out.go:309] Setting ErrFile to fd 1716...
	I1107 18:44:22.344133    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:22.374676    6656 out.go:303] Setting JSON to false
	I1107 18:44:22.390688    6656 start.go:116] hostinfo: {"hostname":"minikube2","uptime":11299,"bootTime":1667835363,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:44:22.390688    6656 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:44:22.396664    6656 out.go:177] * [calico-182331] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:44:22.400695    6656 notify.go:220] Checking for updates...
	I1107 18:44:22.403679    6656 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:44:22.406683    6656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:44:22.408692    6656 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:44:22.414678    6656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:44:22.418691    6656 config.go:180] Loaded profile config "cilium-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.418691    6656 config.go:180] Loaded profile config "kindnet-182329": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.419675    6656 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.419675    6656 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:44:22.817682    6656 docker.go:137] docker version: linux-20.10.20
	I1107 18:44:22.825682    6656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:23.644106    6656 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:44:23.0313363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:23.647086    6656 out.go:177] * Using the docker driver based on user configuration
	I1107 18:44:23.651097    6656 start.go:282] selected driver: docker
	I1107 18:44:23.651097    6656 start.go:808] validating driver "docker" against <nil>
	I1107 18:44:23.651097    6656 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:44:23.742108    6656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:24.532354    6656 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:44:23.954113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:24.533323    6656 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:44:24.534329    6656 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 18:44:24.538357    6656 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:44:24.542322    6656 cni.go:95] Creating CNI manager for "calico"
	I1107 18:44:24.542322    6656 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1107 18:44:24.542322    6656 start_flags.go:317] config:
	{Name:calico-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:44:24.551344    6656 out.go:177] * Starting control plane node calico-182331 in cluster calico-182331
	I1107 18:44:24.554308    6656 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:44:24.557314    6656 out.go:177] * Pulling base image ...
	I1107 18:44:24.562306    6656 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:44:24.562306    6656 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 18:44:24.562306    6656 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 18:44:24.562306    6656 cache.go:57] Caching tarball of preloaded images
	I1107 18:44:24.563334    6656 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 18:44:24.563334    6656 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 18:44:24.563334    6656 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\config.json ...
	I1107 18:44:24.563334    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\config.json: {Name:mk24a85b6cc43618c499e71183b3d1751c6dbd5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:44:24.840294    6656 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 18:44:24.840294    6656 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 18:44:24.840294    6656 cache.go:208] Successfully downloaded all kic artifacts
	I1107 18:44:24.840294    6656 start.go:364] acquiring machines lock for calico-182331: {Name:mk3135a13f78e6746adaf3f45d06490c4a3cda26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 18:44:24.840294    6656 start.go:368] acquired machines lock for "calico-182331" in 0s
	I1107 18:44:24.840294    6656 start.go:93] Provisioning new machine with config: &{Name:calico-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:44:24.841290    6656 start.go:125] createHost starting for "" (driver="docker")
	I1107 18:44:24.844296    6656 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 18:44:24.845266    6656 start.go:159] libmachine.API.Create for "calico-182331" (driver="docker")
	I1107 18:44:24.845266    6656 client.go:168] LocalClient.Create starting
	I1107 18:44:24.848261    6656 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1107 18:44:24.850249    6656 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:24.850249    6656 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:24.865248    6656 cli_runner.go:164] Run: docker network inspect calico-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 18:44:25.135884    6656 cli_runner.go:211] docker network inspect calico-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 18:44:25.152878    6656 network_create.go:272] running [docker network inspect calico-182331] to gather additional debugging logs...
	I1107 18:44:25.152878    6656 cli_runner.go:164] Run: docker network inspect calico-182331
	W1107 18:44:25.421906    6656 cli_runner.go:211] docker network inspect calico-182331 returned with exit code 1
	I1107 18:44:25.421906    6656 network_create.go:275] error running [docker network inspect calico-182331]: docker network inspect calico-182331: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-182331
	I1107 18:44:25.421906    6656 network_create.go:277] output of [docker network inspect calico-182331]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-182331
	
	** /stderr **
	I1107 18:44:25.433899    6656 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 18:44:25.728888    6656 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8] misses:0}
	I1107 18:44:25.728888    6656 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:25.728888    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 18:44:25.748813    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.009757    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.009757    6656 network_create.go:107] failed to create docker network calico-182331 192.168.49.0/24, will retry: subnet is taken
	I1107 18:44:26.030753    6656 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:false}} dirty:map[] misses:0}
	I1107 18:44:26.035150    6656 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.080845    6656 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0] misses:0}
	I1107 18:44:26.080927    6656 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.080927    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 18:44:26.093622    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.372859    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.372859    6656 network_create.go:107] failed to create docker network calico-182331 192.168.58.0/24, will retry: subnet is taken
	I1107 18:44:26.399841    6656 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0] misses:1}
	I1107 18:44:26.399841    6656 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.424841    6656 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0] misses:1}
	I1107 18:44:26.424841    6656 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.424841    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 18:44:26.436851    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.672845    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.672845    6656 network_create.go:107] failed to create docker network calico-182331 192.168.67.0/24, will retry: subnet is taken
	I1107 18:44:26.700844    6656 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0] misses:2}
	I1107 18:44:26.700844    6656 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.723839    6656 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0] misses:2}
	I1107 18:44:26.723839    6656 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.723839    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 18:44:26.740863    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:27.022911    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:27.022911    6656 network_create.go:107] failed to create docker network calico-182331 192.168.76.0/24, will retry: subnet is taken
	I1107 18:44:27.063107    6656 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0] misses:3}
	I1107 18:44:27.064041    6656 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:27.094533    6656 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0 192.168.85.0:0xc0005582b0] misses:3}
	I1107 18:44:27.094533    6656 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:27.094533    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1107 18:44:27.104533    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	I1107 18:44:27.579087    6656 network_create.go:99] docker network calico-182331 192.168.85.0/24 created
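
The sequence just above is minikube hunting for a free /24: candidates step from 192.168.49.0 in increments of 9, each `docker network create` that fails with "subnet is taken" is skipped, and the first success (here 192.168.85.0/24) is reserved. Condensed into a sketch (the helper name, step constant, and error-string check are illustrative assumptions, not minikube's actual network_create.go code):

// pickFreeSubnet steps through candidate /24 subnets and shells out to
// `docker network create`, skipping any subnet the daemon reports as
// overlapping with an existing pool. Sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func pickFreeSubnet(name string) (string, error) {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, 85, ... as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil // e.g. 192.168.85.0/24 in the run above
		}
		if strings.Contains(string(out), "overlaps") {
			continue // subnet is taken; try the next candidate
		}
		return "", fmt.Errorf("docker network create %s: %v: %s", name, err, out)
	}
	return "", fmt.Errorf("no free /24 found for network %s", name)
}
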
	I1107 18:44:27.579087    6656 kic.go:106] calculated static IP "192.168.85.2" for the "calico-182331" container
	I1107 18:44:27.609108    6656 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 18:44:27.982290    6656 cli_runner.go:164] Run: docker volume create calico-182331 --label name.minikube.sigs.k8s.io=calico-182331 --label created_by.minikube.sigs.k8s.io=true
	I1107 18:44:28.294548    6656 oci.go:103] Successfully created a docker volume calico-182331
	I1107 18:44:28.310515    6656 cli_runner.go:164] Run: docker run --rm --name calico-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --entrypoint /usr/bin/test -v calico-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 18:44:30.866118    6656 cli_runner.go:217] Completed: docker run --rm --name calico-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --entrypoint /usr/bin/test -v calico-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (2.5555758s)
	I1107 18:44:30.866118    6656 oci.go:107] Successfully prepared a docker volume calico-182331
	I1107 18:44:30.866118    6656 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:44:30.866118    6656 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 18:44:30.882104    6656 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-182331:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 18:45:03.135223    6656 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-182331:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (32.2527865s)
	I1107 18:45:03.135223    6656 kic.go:188] duration metric: took 32.268772 seconds to extract preloaded images to volume
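
The 32-second step above untars the preloaded image tarball into the cluster's named volume via a throwaway kicbase container with /usr/bin/tar as its entrypoint. The logged command line, reconstructed as a small Go helper (the function name is illustrative, not minikube's actual kic.go code):

// extractPreload replays the docker invocation logged above: mount the lz4
// preload tarball read-only, mount the cluster's named volume at /extractDir,
// and let a disposable base-image container extract into it. Sketch only.
package main

import "os/exec"

func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}
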
	I1107 18:45:03.151124    6656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:45:03.820991    6656 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:60 SystemTime:2022-11-07 18:45:03.3099333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:45:03.834956    6656 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 18:45:04.471881    6656 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-182331 --name calico-182331 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-182331 --network calico-182331 --ip 192.168.85.2 --volume calico-182331:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 18:45:06.284507    6656 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-182331 --name calico-182331 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-182331 --network calico-182331 --ip 192.168.85.2 --volume calico-182331:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456: (1.8126063s)
	I1107 18:45:06.296517    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Running}}
	I1107 18:45:06.549509    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:45:06.797355    6656 cli_runner.go:164] Run: docker exec calico-182331 stat /var/lib/dpkg/alternatives/iptables
	I1107 18:45:07.218097    6656 oci.go:144] the created container "calico-182331" has a running status.
	I1107 18:45:07.218097    6656 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa...
	I1107 18:45:07.334845    6656 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 18:45:07.710936    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:45:07.976731    6656 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 18:45:07.976731    6656 kic_runner.go:114] Args: [docker exec --privileged calico-182331 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 18:45:08.390770    6656 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa...
	I1107 18:45:08.966328    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:45:09.223859    6656 machine.go:88] provisioning docker machine ...
	I1107 18:45:09.224087    6656 ubuntu.go:169] provisioning hostname "calico-182331"
	I1107 18:45:09.233755    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:09.450261    6656 main.go:134] libmachine: Using SSH client type: native
	I1107 18:45:09.458262    6656 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 62061 <nil> <nil>}
	I1107 18:45:09.458262    6656 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-182331 && echo "calico-182331" | sudo tee /etc/hostname
	I1107 18:45:09.705479    6656 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-182331
	
	I1107 18:45:09.714565    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:09.941107    6656 main.go:134] libmachine: Using SSH client type: native
	I1107 18:45:09.942103    6656 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 62061 <nil> <nil>}
	I1107 18:45:09.942103    6656 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-182331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-182331/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-182331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 18:45:10.152140    6656 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 18:45:10.152140    6656 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1107 18:45:10.152140    6656 ubuntu.go:177] setting up certificates
	I1107 18:45:10.152140    6656 provision.go:83] configureAuth start
	I1107 18:45:10.162143    6656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-182331
	I1107 18:45:10.389638    6656 provision.go:138] copyHostCerts
	I1107 18:45:10.389638    6656 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1107 18:45:10.389638    6656 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1107 18:45:10.389638    6656 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1107 18:45:10.390620    6656 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1107 18:45:10.391619    6656 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1107 18:45:10.391619    6656 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1107 18:45:10.392605    6656 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1107 18:45:10.392605    6656 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1107 18:45:10.392605    6656 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1107 18:45:10.393617    6656 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-182331 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-182331]
	I1107 18:45:10.859456    6656 provision.go:172] copyRemoteCerts
	I1107 18:45:10.875635    6656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 18:45:10.884670    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:11.099476    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:45:11.189523    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 18:45:11.260794    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 18:45:11.332287    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 18:45:11.400798    6656 provision.go:86] duration metric: configureAuth took 1.2486443s
	I1107 18:45:11.401156    6656 ubuntu.go:193] setting minikube options for container-runtime
	I1107 18:45:11.401156    6656 config.go:180] Loaded profile config "calico-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:45:11.411170    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:11.659377    6656 main.go:134] libmachine: Using SSH client type: native
	I1107 18:45:11.660373    6656 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 62061 <nil> <nil>}
	I1107 18:45:11.660373    6656 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 18:45:11.866527    6656 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 18:45:11.866527    6656 ubuntu.go:71] root file system type: overlay
	I1107 18:45:11.867530    6656 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 18:45:11.875524    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:12.114173    6656 main.go:134] libmachine: Using SSH client type: native
	I1107 18:45:12.115157    6656 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 62061 <nil> <nil>}
	I1107 18:45:12.116192    6656 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 18:45:12.342023    6656 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 18:45:12.354003    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:12.590997    6656 main.go:134] libmachine: Using SSH client type: native
	I1107 18:45:12.592011    6656 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xabbd60] 0xabece0 <nil>  [] 0s} 127.0.0.1 62061 <nil> <nil>}
	I1107 18:45:12.592011    6656 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 18:45:14.762190    6656 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 18:45:12.325031000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
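The double ExecStart= in the new unit above is deliberate: an empty ExecStart= first clears any command inherited from a base configuration, and only then is the real dockerd command set, exactly as the in-file comment explains. A quick way to confirm which command systemd actually resolved after the restart (a sketch, not output captured by this run):

    sudo systemctl cat docker.service     # show the unit file(s) systemd loaded
    systemctl show -p ExecStart docker    # print the single effective ExecStart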
	
	I1107 18:45:14.762190    6656 machine.go:91] provisioned docker machine in 5.5380422s
	I1107 18:45:14.762190    6656 client.go:171] LocalClient.Create took 49.916399s
	I1107 18:45:14.762190    6656 start.go:167] duration metric: libmachine.API.Create for "calico-182331" took 49.916399s
	I1107 18:45:14.762190    6656 start.go:300] post-start starting for "calico-182331" (driver="docker")
	I1107 18:45:14.762190    6656 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 18:45:14.783190    6656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 18:45:14.792179    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:15.051808    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:45:15.208931    6656 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 18:45:15.219943    6656 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 18:45:15.219943    6656 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 18:45:15.220945    6656 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 18:45:15.220945    6656 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 18:45:15.220945    6656 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1107 18:45:15.220945    6656 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1107 18:45:15.221944    6656 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem -> 99482.pem in /etc/ssl/certs
	I1107 18:45:15.235961    6656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 18:45:15.275939    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /etc/ssl/certs/99482.pem (1708 bytes)
	I1107 18:45:15.330485    6656 start.go:303] post-start completed in 567.7361ms
	I1107 18:45:15.349352    6656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-182331
	I1107 18:45:15.572356    6656 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\config.json ...
	I1107 18:45:15.596496    6656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:45:15.604510    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:15.822208    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:45:15.905665    6656 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 18:45:15.916655    6656 start.go:128] duration metric: createHost completed in 51.0748275s
	I1107 18:45:15.916655    6656 start.go:83] releasing machines lock for "calico-182331", held for 51.0758231s
	I1107 18:45:15.927562    6656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-182331
	I1107 18:45:16.172421    6656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 18:45:16.180236    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:16.181149    6656 ssh_runner.go:195] Run: systemctl --version
	I1107 18:45:16.188129    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:16.406201    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:45:16.421978    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:45:16.639086    6656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 18:45:16.666221    6656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 18:45:16.719030    6656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:45:16.913665    6656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 18:45:17.121987    6656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 18:45:17.152613    6656 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 18:45:17.162603    6656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 18:45:17.188629    6656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 18:45:17.253162    6656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 18:45:17.507479    6656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 18:45:17.696394    6656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:45:17.895154    6656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 18:45:20.029063    6656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.1338858s)
	I1107 18:45:20.045061    6656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 18:45:20.238286    6656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 18:45:20.436055    6656 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 18:45:20.483445    6656 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 18:45:20.496438    6656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 18:45:20.506445    6656 start.go:472] Will wait 60s for crictl version
	I1107 18:45:20.516444    6656 ssh_runner.go:195] Run: sudo crictl version
	I1107 18:45:20.609028    6656 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
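The bare 'sudo crictl version' above succeeds only because of the /etc/crictl.yaml written at 18:45:17; without that file the same endpoints would have to be passed on every invocation. A hedged equivalent, not taken from this log:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock \
                --image-endpoint unix:///var/run/cri-dockerd.sock version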
	I1107 18:45:20.618023    6656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:45:20.705028    6656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 18:45:20.796112    6656 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 18:45:20.813055    6656 cli_runner.go:164] Run: docker exec -t calico-182331 dig +short host.docker.internal
	I1107 18:45:21.245037    6656 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 18:45:21.255057    6656 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 18:45:21.267061    6656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
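The hosts-file edit above is an idempotent append: drop any stale line for the name, re-add the current mapping, then cp the temp file into place under sudo (a plain redirect would not cross the privilege boundary). The same idiom unrolled, with hypothetical NAME/IP variables for readability:

    NAME=host.minikube.internal; IP=192.168.65.2
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts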
	I1107 18:45:21.314480    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-182331
	I1107 18:45:21.512304    6656 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:45:21.520420    6656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:45:21.585879    6656 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:45:21.585879    6656 docker.go:543] Images already preloaded, skipping extraction
	I1107 18:45:21.592843    6656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 18:45:21.659035    6656 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 18:45:21.659035    6656 cache_images.go:84] Images are preloaded, skipping loading
	I1107 18:45:21.666062    6656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
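The CgroupDriver probe here matters because its answer must agree with the cgroupDriver field in the KubeletConfiguration rendered below (cgroupfs in this run); a mismatch leaves the kubelet unable to manage pods. Checked by hand it would look like this sketch (the second file exists once kubeadm has written it):

    docker info --format '{{.CgroupDriver}}'        # expect: cgroupfs
    grep cgroupDriver /var/lib/kubelet/config.yaml  # expect: cgroupDriver: cgroupfs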
	I1107 18:45:21.849316    6656 cni.go:95] Creating CNI manager for "calico"
	I1107 18:45:21.849316    6656 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 18:45:21.849316    6656 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-182331 NodeName:calico-182331 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 18:45:21.849316    6656 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-182331"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
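A config like the one just rendered can be sanity-checked offline before the real init below; kubeadm's dry-run mode parses the file and prints the objects it would create without touching the node (a sketch; the test itself does not run this step):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run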
	
	I1107 18:45:21.849316    6656 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-182331 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1107 18:45:21.861313    6656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 18:45:21.884315    6656 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 18:45:21.894310    6656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 18:45:21.914331    6656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1107 18:45:21.961082    6656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 18:45:22.001514    6656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1107 18:45:22.047505    6656 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1107 18:45:22.057510    6656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 18:45:22.084509    6656 certs.go:54] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331 for IP: 192.168.85.2
	I1107 18:45:22.084509    6656 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1107 18:45:22.085518    6656 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1107 18:45:22.085518    6656 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.key
	I1107 18:45:22.085518    6656 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.crt with IP's: []
	I1107 18:45:22.453302    6656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.crt ...
	I1107 18:45:22.499149    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.crt: {Name:mke0a4c3ccfbb4b1379c90f74ab219fdb9ec2a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:22.500155    6656 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.key ...
	I1107 18:45:22.500155    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\client.key: {Name:mk9710b9fa2a463568680b05bd832d72dc8d1697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:22.501161    6656 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key.43b9df8c
	I1107 18:45:22.502170    6656 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 18:45:22.718016    6656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt.43b9df8c ...
	I1107 18:45:22.719014    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt.43b9df8c: {Name:mkdf07b439a158ec6beef342a7eb7d506dfb6329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:22.720014    6656 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key.43b9df8c ...
	I1107 18:45:22.720014    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key.43b9df8c: {Name:mk32b7b09b217945cc5d3eb17023b352c5662315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:22.721015    6656 certs.go:320] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt
	I1107 18:45:22.727011    6656 certs.go:324] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key
	I1107 18:45:22.728200    6656 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.key
	I1107 18:45:22.729011    6656 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.crt with IP's: []
	I1107 18:45:23.070146    6656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.crt ...
	I1107 18:45:23.070146    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.crt: {Name:mk68eb0001b22e9e904ff9ddc76586f9fc868208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:23.073137    6656 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.key ...
	I1107 18:45:23.073137    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.key: {Name:mkd245f275ea2c7f79c587e196444c056675b8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:45:23.081141    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem (1338 bytes)
	W1107 18:45:23.081141    6656 certs.go:384] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948_empty.pem, impossibly tiny 0 bytes
	I1107 18:45:23.081141    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1107 18:45:23.081141    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1107 18:45:23.081141    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1107 18:45:23.081141    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1107 18:45:23.082135    6656 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem (1708 bytes)
	I1107 18:45:23.083135    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 18:45:23.132149    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 18:45:23.193138    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 18:45:23.239162    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 18:45:23.296149    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 18:45:23.347161    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 18:45:23.401248    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 18:45:23.450743    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 18:45:23.502735    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 18:45:23.556756    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9948.pem --> /usr/share/ca-certificates/9948.pem (1338 bytes)
	I1107 18:45:23.611737    6656 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99482.pem --> /usr/share/ca-certificates/99482.pem (1708 bytes)
	I1107 18:45:23.667750    6656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 18:45:23.714748    6656 ssh_runner.go:195] Run: openssl version
	I1107 18:45:23.744754    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 18:45:23.785525    6656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:45:23.797804    6656 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:45:23.809776    6656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 18:45:23.880818    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 18:45:23.917793    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9948.pem && ln -fs /usr/share/ca-certificates/9948.pem /etc/ssl/certs/9948.pem"
	I1107 18:45:23.981641    6656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9948.pem
	I1107 18:45:24.000130    6656 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 17:01 /usr/share/ca-certificates/9948.pem
	I1107 18:45:24.009173    6656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9948.pem
	I1107 18:45:24.044788    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9948.pem /etc/ssl/certs/51391683.0"
	I1107 18:45:24.094238    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99482.pem && ln -fs /usr/share/ca-certificates/99482.pem /etc/ssl/certs/99482.pem"
	I1107 18:45:24.125220    6656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99482.pem
	I1107 18:45:24.138227    6656 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 17:01 /usr/share/ca-certificates/99482.pem
	I1107 18:45:24.158206    6656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99482.pem
	I1107 18:45:24.185257    6656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99482.pem /etc/ssl/certs/3ec20f2e.0"
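The .0 link names above follow OpenSSL's subject-hash convention: the name is the certificate's subject hash plus a collision counter, which is how openssl and most TLS clients locate a CA under /etc/ssl/certs. Reproducing the first link by hand (sketch, using values from this run):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0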
	I1107 18:45:24.232952    6656 kubeadm.go:396] StartCluster: {Name:calico-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:45:24.248911    6656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 18:45:24.316655    6656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 18:45:24.385305    6656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 18:45:24.406206    6656 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 18:45:24.415823    6656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 18:45:24.452059    6656 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 18:45:24.452059    6656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 18:45:24.553480    6656 kubeadm.go:317] W1107 18:45:24.550273    1236 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 18:45:24.724606    6656 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 18:45:24.947278    6656 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 18:45:56.436515    6656 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 18:45:56.436715    6656 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 18:45:56.437086    6656 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 18:45:56.437643    6656 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 18:45:56.437917    6656 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1107 18:45:56.438244    6656 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 18:45:56.448518    6656 out.go:204]   - Generating certificates and keys ...
	I1107 18:45:56.449203    6656 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 18:45:56.449203    6656 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 18:45:56.449203    6656 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 18:45:56.449203    6656 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 18:45:56.449830    6656 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 18:45:56.449830    6656 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 18:45:56.449830    6656 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 18:45:56.449830    6656 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-182331 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 18:45:56.450554    6656 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 18:45:56.450813    6656 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-182331 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 18:45:56.450973    6656 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 18:45:56.451317    6656 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 18:45:56.451807    6656 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 18:45:56.451807    6656 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 18:45:56.451807    6656 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 18:45:56.451807    6656 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 18:45:56.451807    6656 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 18:45:56.451807    6656 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 18:45:56.452810    6656 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 18:45:56.452810    6656 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 18:45:56.452810    6656 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 18:45:56.452810    6656 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 18:45:56.457845    6656 out.go:204]   - Booting up control plane ...
	I1107 18:45:56.458558    6656 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 18:45:56.458679    6656 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 18:45:56.458851    6656 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 18:45:56.458851    6656 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 18:45:56.458851    6656 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 18:45:56.458851    6656 kubeadm.go:317] [apiclient] All control plane components are healthy after 22.505049 seconds
	I1107 18:45:56.459867    6656 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 18:45:56.459867    6656 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 18:45:56.459867    6656 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 18:45:56.459867    6656 kubeadm.go:317] [mark-control-plane] Marking the node calico-182331 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 18:45:56.460869    6656 kubeadm.go:317] [bootstrap-token] Using token: 1yuitv.a1ncy0dn2j4vt51o
	I1107 18:45:56.472429    6656 out.go:204]   - Configuring RBAC rules ...
	I1107 18:45:56.472429    6656 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 18:45:56.472429    6656 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 18:45:56.473484    6656 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 18:45:56.473638    6656 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 18:45:56.474102    6656 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 18:45:56.474102    6656 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 18:45:56.474599    6656 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 18:45:56.474599    6656 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1107 18:45:56.474599    6656 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1107 18:45:56.474599    6656 kubeadm.go:317] 
	I1107 18:45:56.474599    6656 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1107 18:45:56.474599    6656 kubeadm.go:317] 
	I1107 18:45:56.474599    6656 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1107 18:45:56.474599    6656 kubeadm.go:317] 
	I1107 18:45:56.474599    6656 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1107 18:45:56.475632    6656 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 18:45:56.475757    6656 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 18:45:56.475757    6656 kubeadm.go:317] 
	I1107 18:45:56.475869    6656 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1107 18:45:56.475869    6656 kubeadm.go:317] 
	I1107 18:45:56.476026    6656 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 18:45:56.476094    6656 kubeadm.go:317] 
	I1107 18:45:56.476094    6656 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1107 18:45:56.476354    6656 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 18:45:56.476354    6656 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 18:45:56.476354    6656 kubeadm.go:317] 
	I1107 18:45:56.476354    6656 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 18:45:56.476943    6656 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1107 18:45:56.477002    6656 kubeadm.go:317] 
	I1107 18:45:56.477126    6656 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 1yuitv.a1ncy0dn2j4vt51o \
	I1107 18:45:56.477126    6656 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae \
	I1107 18:45:56.477126    6656 kubeadm.go:317] 	--control-plane 
	I1107 18:45:56.477126    6656 kubeadm.go:317] 
	I1107 18:45:56.477884    6656 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1107 18:45:56.477884    6656 kubeadm.go:317] 
	I1107 18:45:56.478153    6656 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 1yuitv.a1ncy0dn2j4vt51o \
	I1107 18:45:56.478221    6656 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:5ee7b05911e14fac42df88d6576770cfc35fa970444b7ab659b27324c22502ae 
	I1107 18:45:56.478221    6656 cni.go:95] Creating CNI manager for "calico"
	I1107 18:45:56.482109    6656 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1107 18:45:56.485094    6656 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 18:45:56.485094    6656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1107 18:45:56.659657    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 18:46:00.886089    6656 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.2263864s)
	I1107 18:46:00.886089    6656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 18:46:00.898090    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:00.901094    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=calico-182331 minikube.k8s.io/updated_at=2022_11_07T18_46_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:00.949619    6656 ops.go:34] apiserver oom_adj: -16
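The label command at 18:46:00 stamps the node with the minikube version, commit, profile name, timestamp, and primary flag; once the apiserver settles they can be read back with the same in-cluster kubeconfig (sketch, not from this log):

    sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node calico-182331 --show-labels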
	I1107 18:46:01.183990    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:01.980578    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:02.477251    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:02.985923    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:03.493166    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:03.980133    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:04.475871    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:04.978942    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:05.492364    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:05.994711    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:06.479827    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:10.446557    6656 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.966655s)
	I1107 18:46:10.491941    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:12.594545    6656 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.1025813s)
	I1107 18:46:12.988136    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:13.978290    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:14.968722    6656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 18:46:15.347754    6656 kubeadm.go:1067] duration metric: took 14.4615074s to wait for elevateKubeSystemPrivileges.
	I1107 18:46:15.347754    6656 kubeadm.go:398] StartCluster complete in 51.1142455s
	I1107 18:46:15.347754    6656 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:46:15.347754    6656 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:46:15.349759    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:46:16.362715    6656 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-182331" rescaled to 1
	I1107 18:46:16.362715    6656 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:46:16.362715    6656 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 18:46:16.365730    6656 out.go:177] * Verifying Kubernetes components...
	I1107 18:46:16.362715    6656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 18:46:16.362715    6656 addons.go:65] Setting storage-provisioner=true in profile "calico-182331"
	I1107 18:46:16.362715    6656 addons.go:65] Setting default-storageclass=true in profile "calico-182331"
	I1107 18:46:16.363719    6656 config.go:180] Loaded profile config "calico-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:46:16.369709    6656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-182331"
	I1107 18:46:16.369709    6656 addons.go:227] Setting addon storage-provisioner=true in "calico-182331"
	W1107 18:46:16.369709    6656 addons.go:236] addon storage-provisioner should already be in state true
	I1107 18:46:16.369709    6656 host.go:66] Checking if "calico-182331" exists ...
	I1107 18:46:16.384714    6656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:46:16.393724    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:46:16.394700    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:46:16.646767    6656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 18:46:16.650746    6656 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:46:16.650746    6656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 18:46:16.663729    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:46:16.675719    6656 addons.go:227] Setting addon default-storageclass=true in "calico-182331"
	W1107 18:46:16.675719    6656 addons.go:236] addon default-storageclass should already be in state true
	I1107 18:46:16.675719    6656 host.go:66] Checking if "calico-182331" exists ...
	I1107 18:46:16.693715    6656 cli_runner.go:164] Run: docker container inspect calico-182331 --format={{.State.Status}}
	I1107 18:46:16.749716    6656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 18:46:16.765729    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-182331
	I1107 18:46:16.959173    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:46:16.976719    6656 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 18:46:16.976719    6656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 18:46:16.984736    6656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-182331
	I1107 18:46:17.065756    6656 node_ready.go:35] waiting up to 5m0s for node "calico-182331" to be "Ready" ...
	I1107 18:46:17.147741    6656 node_ready.go:49] node "calico-182331" has status "Ready":"True"
	I1107 18:46:17.147741    6656 node_ready.go:38] duration metric: took 81.9846ms waiting for node "calico-182331" to be "Ready" ...
	I1107 18:46:17.147741    6656 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 18:46:17.181724    6656 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace to be "Ready" ...
	I1107 18:46:17.216713    6656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62061 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-182331\id_rsa Username:docker}
	I1107 18:46:17.769612    6656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:46:17.960812    6656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 18:46:19.439822    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:21.849890    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:24.241020    6656 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.491221s)
	I1107 18:46:24.241020    6656 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
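The sed pipeline that just completed splices a hosts plugin block into the CoreDNS Corefile ahead of its forward directive, so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from the command at 18:46:16, the injected stanza is:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }

It can be read back with 'kubectl -n kube-system get configmap coredns -o yaml'.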
	I1107 18:46:24.352101    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:24.779105    6656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.0093118s)
	I1107 18:46:24.779297    6656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.8182659s)
	I1107 18:46:24.780666    6656 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 18:46:24.788825    6656 addons.go:488] enableAddons completed in 8.426017s
	I1107 18:46:26.777773    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:28.791729    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:31.338691    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:33.835823    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:35.847892    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:37.848322    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:39.850784    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:41.859601    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:44.335724    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:46.352342    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:48.353339    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:50.859321    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:53.344653    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:55.355548    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:57.357368    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:46:59.452200    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:01.856048    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:04.336164    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:06.785326    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:09.278181    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:11.351353    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:13.835689    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:16.340235    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:18.785544    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:20.791035    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:22.846149    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:24.850129    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:26.938582    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:29.352193    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:31.353535    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:33.851567    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:35.938115    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:38.352262    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:40.354438    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:42.791359    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:44.845552    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:46.856234    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:49.288737    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:51.789686    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:53.850089    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:56.281689    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:47:58.337643    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:00.791853    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:03.305931    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:05.801105    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:08.281313    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:10.288133    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:12.791753    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:14.845240    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:16.852631    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:19.336713    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:21.338486    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:23.342717    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:25.354847    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:27.857368    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:30.284246    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:32.451980    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:34.777109    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:36.853834    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:39.278092    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:41.350215    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:43.941294    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:46.287307    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:48.348452    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:50.437517    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:52.863745    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:55.353430    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:57.359012    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:48:59.840377    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:01.853846    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:04.340036    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:06.783136    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:08.855751    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:11.279086    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:13.294546    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:15.344577    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:17.356301    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:19.857176    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:22.338783    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:24.353757    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:26.777946    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:28.851640    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:30.858570    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:33.278813    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:35.348953    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:37.353519    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:39.852344    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:42.293801    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:44.352748    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:46.794176    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:48.804364    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:50.854023    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:53.345493    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:55.354688    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:57.355389    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:49:59.782015    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:01.840845    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:04.353979    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:06.848208    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:08.851030    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:11.358063    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:13.852644    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:16.351805    6656 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:17.543694    6656 pod_ready.go:81] duration metric: took 4m0.3593483s waiting for pod "calico-kube-controllers-7df895d496-jwvdb" in "kube-system" namespace to be "Ready" ...
	E1107 18:50:17.543922    6656 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 18:50:17.543922    6656 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-8dtf6" in "kube-system" namespace to be "Ready" ...
	I1107 18:50:19.678150    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:22.244142    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:24.674333    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:26.841253    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:29.254544    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:31.740370    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:33.758043    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:36.239361    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:38.239731    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:40.680826    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:42.682913    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:44.741908    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:46.764412    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:49.241416    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:51.705758    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:54.171861    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:56.238693    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:50:58.280682    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:00.742112    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:03.241107    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:05.753212    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:08.256817    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:10.683256    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:12.743766    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:15.172019    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:17.340879    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:19.744537    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:22.182288    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:24.184153    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:26.743929    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:29.176542    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:31.248334    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:33.682243    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:36.238395    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:38.242519    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:40.687586    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:43.183479    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:45.242278    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:47.254670    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:49.679387    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:51.685404    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:53.739008    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:55.758838    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:57.762553    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:51:59.844915    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:02.246388    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:04.741510    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:07.345864    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:10.199518    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:12.685894    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:18.266499    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:20.771464    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:23.181510    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:25.243477    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:27.254478    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:29.846293    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:32.240057    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:34.255464    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:36.688192    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:38.733721    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:40.842804    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:43.188510    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:45.256283    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:47.258759    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:49.747447    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:52.242452    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:54.243916    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:56.263105    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:52:58.679738    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:00.688381    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:03.261131    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:05.681701    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:07.742905    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:10.247808    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:12.678065    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:14.742043    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:16.761056    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:19.248021    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:21.683616    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:24.175721    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:26.178018    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:28.249397    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:30.743595    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:32.757964    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:35.248679    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:37.261557    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:39.750779    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:42.178341    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:44.189001    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:46.747706    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:49.244254    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:51.683979    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:54.175759    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:56.182719    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:53:58.743576    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:01.257300    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:03.746652    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:06.241422    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:08.675844    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:10.683111    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:13.178286    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:15.259150    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:17.757689    6656 pod_ready.go:102] pod "calico-node-8dtf6" in "kube-system" namespace has status "Ready":"False"
	I1107 18:54:17.851678    6656 pod_ready.go:81] duration metric: took 4m0.3051207s waiting for pod "calico-node-8dtf6" in "kube-system" namespace to be "Ready" ...
	E1107 18:54:17.852721    6656 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 18:54:17.852721    6656 pod_ready.go:38] duration metric: took 8m0.6997229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 18:54:17.856669    6656 out.go:177] 
	W1107 18:54:17.860698    6656 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1107 18:54:17.860698    6656 out.go:239] * 
	W1107 18:54:17.863690    6656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 18:54:17.868689    6656 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (596.39s)
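The repeated pod_ready.go:102 lines above are a readiness poll: minikube re-checks the pod's Ready condition roughly every two seconds until it turns True or the per-pod timeout expires (4m0s for calico-kube-controllers-7df895d496-jwvdb, then 5m0s for calico-node-8dtf6), after which the whole start exits with GUEST_START. A minimal sketch of that style of poll with client-go follows; waitPodReady and the fake in-memory clientset are illustrative stand-ins, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitPodReady re-checks the pod's Ready condition until it is True or the
// timeout elapses, which is the loop shape behind the log lines above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// A fake clientset holding an already-Ready pod keeps the sketch runnable
	// without a cluster; a real run would build the clientset from kubeconfig.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "calico-node-8dtf6", Namespace: "kube-system"},
		Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}},
	}
	cs := fake.NewSimpleClientset(pod)
	fmt.Println("ready:", waitPodReady(cs, "kube-system", "calico-node-8dtf6", time.Minute) == nil)
}

Against a real cluster the loop's shape (poll, tolerate transient errors, give up at the deadline) is exactly what produces the log pattern above: one status line per probe until the "duration metric: took ..." timeout entry.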

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (42.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-184042 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-184042 --alsologtostderr -v=1: exit status 80 (4.3383619s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-184042 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 18:44:40.273934    7896 out.go:296] Setting OutFile to fd 1556 ...
	I1107 18:44:40.358969    7896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:40.358969    7896 out.go:309] Setting ErrFile to fd 1772...
	I1107 18:44:40.358969    7896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:40.377925    7896 out.go:303] Setting JSON to false
	I1107 18:44:40.377925    7896 mustload.go:65] Loading cluster: newest-cni-184042
	I1107 18:44:40.377925    7896 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:40.402939    7896 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:40.698960    7896 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:40.708926    7896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:40.985948    7896 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gate
s: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.28.0/minikube-v1.28.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.28.0-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=2621
44) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube2:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-184042 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1107 18:44:40.993946    7896 out.go:177] * Pausing node newest-cni-184042 ... 
	I1107 18:44:40.995928    7896 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:41.006924    7896 ssh_runner.go:195] Run: systemctl --version
	I1107 18:44:41.014954    7896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:41.272927    7896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61904 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\newest-cni-184042\id_rsa Username:docker}
	I1107 18:44:41.567372    7896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:44:41.658573    7896 pause.go:51] kubelet running: true
	I1107 18:44:41.678565    7896 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1107 18:44:42.490153    7896 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I1107 18:44:42.602914    7896 docker.go:461] Pausing containers: [67d3fa8b323c 9afff48667c8 64661acfd1b3 48d53909123b e10c938414f0 5fef7f25a822 4f8569486774 05c970ce0ed6 190d28bb055e 89bd511a700e 73597d02e34d 5612c8b4c22f 0d4b0b6b0775]
	I1107 18:44:42.616264    7896 ssh_runner.go:195] Run: docker pause 67d3fa8b323c 9afff48667c8 64661acfd1b3 48d53909123b e10c938414f0 5fef7f25a822 4f8569486774 05c970ce0ed6 190d28bb055e 89bd511a700e 73597d02e34d 5612c8b4c22f 0d4b0b6b0775
	I1107 18:44:43.302893    7896 out.go:177] 
	W1107 18:44:43.307908    7896 out.go:239] X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause 67d3fa8b323c 9afff48667c8 64661acfd1b3 48d53909123b e10c938414f0 5fef7f25a822 4f8569486774 05c970ce0ed6 190d28bb055e 89bd511a700e 73597d02e34d 5612c8b4c22f 0d4b0b6b0775: Process exited with status 1
	stdout:
	9afff48667c8
	64661acfd1b3
	48d53909123b
	e10c938414f0
	5fef7f25a822
	4f8569486774
	05c970ce0ed6
	190d28bb055e
	89bd511a700e
	73597d02e34d
	5612c8b4c22f
	0d4b0b6b0775
	
	stderr:
	Error response from daemon: Container 67d3fa8b323c091c00757fb0088de742cfa3d0279dacf0e2b285f271956d6141 is not running
	
	W1107 18:44:43.307908    7896 out.go:239] * 
	W1107 18:44:44.208792    7896 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_33.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 18:44:44.219583    7896 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p newest-cni-184042 --alsologtostderr -v=1 failed: exit status 80
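The pause failure is a race: container 67d3fa8b323c exited between the docker ps listing at 18:44:42.490 and the batched docker pause call, so the daemon rejected the batch with "is not running" even though the other twelve containers in the stdout list did pause. A sketch of a more tolerant strategy, pausing containers one at a time and skipping any that have already exited; pauseContainers is a hypothetical helper, not the fix minikube ships.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// pauseContainers pauses each container individually so that one container
// exiting mid-operation does not abort the whole batch.
func pauseContainers(ids []string) error {
	for _, id := range ids {
		out, err := exec.Command("docker", "pause", id).CombinedOutput()
		if err != nil {
			// The container went away between listing and pausing: skip it.
			if strings.Contains(string(out), "is not running") {
				fmt.Printf("skipping %s: exited before pause\n", id)
				continue
			}
			return fmt.Errorf("docker pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Pass container IDs as arguments, e.g. the IDs from the failing run above.
	if err := pauseContainers(os.Args[1:]); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}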
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-184042
helpers_test.go:235: (dbg) docker inspect newest-cni-184042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72",
	        "Created": "2022-11-07T18:42:04.1283357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T18:43:41.7843919Z",
	            "FinishedAt": "2022-11-07T18:43:34.6903176Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/hostname",
	        "HostsPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/hosts",
	        "LogPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72-json.log",
	        "Name": "/newest-cni-184042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-184042:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-184042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14
d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/d
ocker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475ad
f77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae7
45d03cb146ad24eb8be305a2a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-184042",
	                "Source": "/var/lib/docker/volumes/newest-cni-184042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-184042",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-184042",
	                "name.minikube.sigs.k8s.io": "newest-cni-184042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21cd2cc8629289eb66a03cf5ae29f213d1c89ea74df76eb8aaa67e7491d0374e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61908"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/21cd2cc86292",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-184042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68a70e21c94a",
	                        "newest-cni-184042"
	                    ],
	                    "NetworkID": "e48d64473a88f312b9ba44f0f75d601aa1fe5a705ef8e87a426ad0bfa2769914",
	                    "EndpointID": "0e4d10ebbbecaa2b4da6df74b7f2b03613fe5133095bc139758adbb352fd5656",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
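For context on the inspect dump above: the cli_runner lines in the stderr show minikube extracting single fields with a Go template (docker container inspect -f ...) rather than parsing this whole JSON document. A standalone sketch of the same technique, resolving the host port mapped to the API server's 8443/tcp (61908 in the output above); hostPort is an illustrative helper, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the docker CLI for a single field of the inspect document:
// the host port bound to the given container port.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("newest-cni-184042", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver forwarded to 127.0.0.1:" + p) // 61908 in the dump above
}

The same template syntax works for any field of the inspect document, e.g. the {{.State.Status}} query visible at 18:44:40.402 in the stderr above.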
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042: exit status 2 (1.9553931s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-184042 logs -n 25
E1107 18:44:49.409435    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-184042 logs -n 25: (15.5086339s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p embed-certs-182958                                      | embed-certs-182958           | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:41 GMT |
	| delete  | -p no-preload-182933                                       | no-preload-182933            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:40 GMT |
	| delete  | -p old-k8s-version-182839                                  | old-k8s-version-182839       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:40 GMT |
	| start   | -p newest-cni-184042 --memory=2200 --alsologtostderr       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:43 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| delete  | -p no-preload-182933                                       | no-preload-182933            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:41 GMT |
	| start   | -p auto-182327 --memory=2048                               | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:43 GMT |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-182958                                      | embed-certs-182958           | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	| start   | -p kindnet-182329                                          | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:43 GMT |
	|         | --memory=2048                                              |                              |                   |         |                     |                     |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                              |                              |                   |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |                   |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:42 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:42 GMT | 07 Nov 22 18:42 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	| start   | -p cilium-182331 --memory=2048                             | cilium-182331                | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:42 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-184042                 | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-184042                                       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --alsologtostderr -v=3                                     |                              |                   |         |                     |                     |
	| ssh     | -p auto-182327 pgrep -a                                    | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-184042                      | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |                   |         |                     |                     |
	| start   | -p newest-cni-184042 --memory=2200 --alsologtostderr       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:44 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| ssh     | -p kindnet-182329 pgrep -a                                 | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| delete  | -p auto-182327                                             | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	| start   | -p calico-182331 --memory=2048                             | calico-182331                | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| ssh     | -p newest-cni-184042 sudo                                  | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| pause   | -p newest-cni-184042                                       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p kindnet-182329                                          | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
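
	Four rows in the Audit table above have no End Time: the cilium-182331 and calico-182331 starts (both still running in parallel network-plugin groups), the pause of newest-cni-184042 that this post-mortem covers, and the kindnet-182329 delete. The pause under test was therefore still unresolved at the moment these logs were collected.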
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 18:44:22
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
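	(Per that format line, an entry such as I1107 18:44:22.262086 6656 out.go:296 decodes as: Info severity, Nov 7, 18:44:22.262086, thread id 6656, logged from out.go line 296. The thread id is what separates the three interleaved runs below: 6656 is the calico-182331 start, 9208 is the newest-cni-184042 restart being paused, and 9840 is the cilium-182331 start.)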
	I1107 18:44:22.262086    6656 out.go:296] Setting OutFile to fd 2044 ...
	I1107 18:44:22.344133    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:22.344133    6656 out.go:309] Setting ErrFile to fd 1716...
	I1107 18:44:22.344133    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:22.374676    6656 out.go:303] Setting JSON to false
	I1107 18:44:22.390688    6656 start.go:116] hostinfo: {"hostname":"minikube2","uptime":11299,"bootTime":1667835363,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:44:22.390688    6656 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:44:22.396664    6656 out.go:177] * [calico-182331] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:44:22.400695    6656 notify.go:220] Checking for updates...
	I1107 18:44:22.403679    6656 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:44:22.406683    6656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:44:22.408692    6656 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:44:22.414678    6656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:44:18.158554    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 18:44:18.158554    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 18:44:18.465572    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:18.547574    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1107 18:44:18.547574    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
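
	The 403s above are the start of minikube's apiserver health poll: api_server.go re-requests https://127.0.0.1:61908/healthz roughly every 500ms and logs each non-200 body, and the anonymous probe stays Forbidden until the RBAC bootstrap roles (including system:public-info-viewer) are recreated. A minimal stand-alone sketch of such a poll follows; it is an illustration under assumptions, not minikube's actual implementation, and the port and the two-minute cap are picked for the example:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz keeps probing the apiserver's /healthz endpoint until it
	// answers 200 or the deadline passes, mirroring the retry cadence in the
	// log above. TLS verification is skipped because the probe is anonymous
	// against a self-signed local endpoint.
	func pollHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy")
	}

	func main() {
		if err := pollHealthz("https://127.0.0.1:61908/healthz", time.Now().Add(2*time.Minute)); err != nil {
			fmt.Println(err)
		}
	}
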
	I1107 18:44:18.958576    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:20.949014    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:20.949014    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
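
	Each [+]/[-] line in those 500 responses is one named check from the apiserver's composite healthz handler; a single [-] entry makes the whole endpoint report 500 with "healthz check failed". Here the failing entries are post-start hooks that have not finished re-running after the restart, and their individual error messages are deliberately withheld from the HTTP response (they go to the apiserver's own log instead). An authenticated client can pull the same per-check breakdown with, for example, kubectl get --raw='/healthz?verbose'.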
	I1107 18:44:20.958051    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:21.056032    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:21.056032    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:21.467042    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:21.721573    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:21.722580    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:21.958096    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:22.038140    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:22.038140    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:22.466681    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:22.545673    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:22.545673    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:22.418691    6656 config.go:180] Loaded profile config "cilium-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.418691    6656 config.go:180] Loaded profile config "kindnet-182329": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.419675    6656 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:22.419675    6656 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:44:22.817682    6656 docker.go:137] docker version: linux-20.10.20
	I1107 18:44:22.825682    6656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:23.644106    6656 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:44:23.0313363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:23.647086    6656 out.go:177] * Using the docker driver based on user configuration
	I1107 18:44:23.651097    6656 start.go:282] selected driver: docker
	I1107 18:44:23.651097    6656 start.go:808] validating driver "docker" against <nil>
	I1107 18:44:23.651097    6656 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:44:23.742108    6656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:24.532354    6656 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-11-07 18:44:23.954113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:24.533323    6656 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:44:24.534329    6656 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 18:44:24.538357    6656 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:44:24.542322    6656 cni.go:95] Creating CNI manager for "calico"
	I1107 18:44:24.542322    6656 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1107 18:44:24.542322    6656 start_flags.go:317] config:
	{Name:calico-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:44:24.551344    6656 out.go:177] * Starting control plane node calico-182331 in cluster calico-182331
	I1107 18:44:24.554308    6656 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:44:24.557314    6656 out.go:177] * Pulling base image ...
	I1107 18:44:24.562306    6656 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:44:24.562306    6656 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 18:44:24.562306    6656 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 18:44:24.562306    6656 cache.go:57] Caching tarball of preloaded images
	I1107 18:44:24.563334    6656 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 18:44:24.563334    6656 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 18:44:24.563334    6656 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\config.json ...
	I1107 18:44:24.563334    6656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-182331\config.json: {Name:mk24a85b6cc43618c499e71183b3d1751c6dbd5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:44:24.840294    6656 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 18:44:24.840294    6656 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 18:44:24.840294    6656 cache.go:208] Successfully downloaded all kic artifacts
	I1107 18:44:24.840294    6656 start.go:364] acquiring machines lock for calico-182331: {Name:mk3135a13f78e6746adaf3f45d06490c4a3cda26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 18:44:24.840294    6656 start.go:368] acquired machines lock for "calico-182331" in 0s
	I1107 18:44:24.840294    6656 start.go:93] Provisioning new machine with config: &{Name:calico-182331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-182331 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:44:24.841290    6656 start.go:125] createHost starting for "" (driver="docker")
	I1107 18:44:23.378588    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:25.381904    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
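	(These pod_ready lines come from process 9840, the parallel cilium-182331 start, which is still waiting for pod cilium-k65t2 to report Ready; that wait is the one that ultimately times out in TestNetworkPlugins/group/cilium/Start.)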
	I1107 18:44:22.964691    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:23.057939    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:23.057939    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:23.461571    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:23.543129    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 18:44:23.543129    9208 api_server.go:102] status: https://127.0.0.1:61908/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 18:44:23.959851    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:24.052306    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 200:
	ok
	I1107 18:44:24.134301    9208 api_server.go:140] control plane version: v1.25.3
	I1107 18:44:24.134301    9208 api_server.go:130] duration metric: took 14.6859454s to wait for apiserver health ...
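	(End of the health poll traced above: once rbac/bootstrap-roles, the last failing check, completed, /healthz moved from 500 to 200 and the wait exited after 14.7s in total.)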
	I1107 18:44:24.134301    9208 cni.go:95] Creating CNI manager for ""
	I1107 18:44:24.134301    9208 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 18:44:24.134301    9208 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 18:44:24.173328    9208 system_pods.go:59] 8 kube-system pods found
	I1107 18:44:24.173328    9208 system_pods.go:61] "coredns-565d847f94-tss8d" [d512268b-5493-4a76-853d-e9f31400a7b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 18:44:24.173328    9208 system_pods.go:61] "etcd-newest-cni-184042" [0d715b98-2231-42a0-8573-c1d9eab5057f] Running
	I1107 18:44:24.173328    9208 system_pods.go:61] "kube-apiserver-newest-cni-184042" [ba764556-2b61-441c-a6a5-fd7e0bd8211a] Running
	I1107 18:44:24.173328    9208 system_pods.go:61] "kube-controller-manager-newest-cni-184042" [5dc88734-a27a-44c4-a274-9c4b4df99114] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 18:44:24.173328    9208 system_pods.go:61] "kube-proxy-ghl24" [777fe1ae-6e59-4ec8-be2d-2e48dd29ce38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 18:44:24.173328    9208 system_pods.go:61] "kube-scheduler-newest-cni-184042" [61c3e37c-060d-498c-8dc6-0a71b7b0d54f] Running
	I1107 18:44:24.173328    9208 system_pods.go:61] "metrics-server-5c8fd5cf8-8zhxb" [2a3fbd51-1a7a-435b-85bc-88b33a7b6003] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 18:44:24.173328    9208 system_pods.go:61] "storage-provisioner" [1f50db4d-b73e-4903-aa55-96edc2ec2c37] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 18:44:24.173328    9208 system_pods.go:74] duration metric: took 39.0264ms to wait for pod list to return data ...
	I1107 18:44:24.173328    9208 node_conditions.go:102] verifying NodePressure condition ...
	I1107 18:44:24.278320    9208 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1107 18:44:24.279313    9208 node_conditions.go:123] node cpu capacity is 16
	I1107 18:44:24.279313    9208 node_conditions.go:105] duration metric: took 105.9839ms to run NodePressure ...
	I1107 18:44:24.279313    9208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 18:44:26.954912    9208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.6746087s)
	I1107 18:44:26.954912    9208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 18:44:27.136557    9208 ops.go:34] apiserver oom_adj: -16
	I1107 18:44:27.136557    9208 kubeadm.go:631] restartCluster took 28.2265223s
	I1107 18:44:27.136557    9208 kubeadm.go:398] StartCluster complete in 28.3719567s
	I1107 18:44:27.136557    9208 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:44:27.137558    9208 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:44:27.140561    9208 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:44:27.345532    9208 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-184042" rescaled to 1
	I1107 18:44:27.345532    9208 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:44:27.348551    9208 out.go:177] * Verifying Kubernetes components...
	I1107 18:44:24.844296    6656 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 18:44:24.845266    6656 start.go:159] libmachine.API.Create for "calico-182331" (driver="docker")
	I1107 18:44:24.845266    6656 client.go:168] LocalClient.Create starting
	I1107 18:44:24.848261    6656 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:24.849246    6656 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1107 18:44:24.850249    6656 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:24.850249    6656 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:24.865248    6656 cli_runner.go:164] Run: docker network inspect calico-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 18:44:25.135884    6656 cli_runner.go:211] docker network inspect calico-182331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 18:44:25.152878    6656 network_create.go:272] running [docker network inspect calico-182331] to gather additional debugging logs...
	I1107 18:44:25.152878    6656 cli_runner.go:164] Run: docker network inspect calico-182331
	W1107 18:44:25.421906    6656 cli_runner.go:211] docker network inspect calico-182331 returned with exit code 1
	I1107 18:44:25.421906    6656 network_create.go:275] error running [docker network inspect calico-182331]: docker network inspect calico-182331: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-182331
	I1107 18:44:25.421906    6656 network_create.go:277] output of [docker network inspect calico-182331]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-182331
	
	** /stderr **
	I1107 18:44:25.433899    6656 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 18:44:25.728888    6656 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8] misses:0}
	I1107 18:44:25.728888    6656 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:25.728888    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 18:44:25.748813    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.009757    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.009757    6656 network_create.go:107] failed to create docker network calico-182331 192.168.49.0/24, will retry: subnet is taken
	I1107 18:44:26.030753    6656 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:false}} dirty:map[] misses:0}
	I1107 18:44:26.035150    6656 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.080845    6656 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0] misses:0}
	I1107 18:44:26.080927    6656 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.080927    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 18:44:26.093622    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.372859    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.372859    6656 network_create.go:107] failed to create docker network calico-182331 192.168.58.0/24, will retry: subnet is taken
	I1107 18:44:26.399841    6656 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0] misses:1}
	I1107 18:44:26.399841    6656 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.424841    6656 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0] misses:1}
	I1107 18:44:26.424841    6656 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.424841    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 18:44:26.436851    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:26.672845    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:26.672845    6656 network_create.go:107] failed to create docker network calico-182331 192.168.67.0/24, will retry: subnet is taken
	I1107 18:44:26.700844    6656 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0] misses:2}
	I1107 18:44:26.700844    6656 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.723839    6656 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0] misses:2}
	I1107 18:44:26.723839    6656 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:26.723839    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 18:44:26.740863    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
	W1107 18:44:27.022911    6656 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331 returned with exit code 1
	W1107 18:44:27.022911    6656 network_create.go:107] failed to create docker network calico-182331 192.168.76.0/24, will retry: subnet is taken
	I1107 18:44:27.063107    6656 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0] misses:3}
	I1107 18:44:27.064041    6656 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:27.094533    6656 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068c7f8] amended:true}} dirty:map[192.168.49.0:0xc00068c7f8 192.168.58.0:0xc00014b6d0 192.168.67.0:0xc00068c8c0 192.168.76.0:0xc0005bc4e0 192.168.85.0:0xc0005582b0] misses:3}
	I1107 18:44:27.094533    6656 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:27.094533    6656 network_create.go:115] attempt to create docker network calico-182331 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1107 18:44:27.104533    6656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-182331 calico-182331
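The retry loop above steps the third octet of the candidate subnet by 9 (49 → 58 → 67 → 76 → 85), reserving each one briefly and moving on whenever `docker network create` reports the subnet is taken. A sketch of that walk, with the step size and the "taken" detection inferred from this log rather than from minikube source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork walks candidate /24 subnets the way the log above does:
    // start at 192.168.49.0/24 and step the third octet by 9 until
    // `docker network create` succeeds or the candidates run out.
    func createNetwork(name string) (string, error) {
        for octet := 49; octet <= 255-9; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge",
                "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=1500",
                name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            // Docker reports an overlapping subnet as a pool overlap;
            // treat that as "subnet is taken" and try the next candidate.
            if strings.Contains(string(out), "overlap") {
                continue
            }
            return "", fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createNetwork("calico-182331")
        fmt.Println(subnet, err)
    }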
	I1107 18:44:27.345532    9208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 18:44:27.345532    9208 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1107 18:44:27.346555    9208 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:27.352529    9208 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-184042"
	I1107 18:44:27.352529    9208 addons.go:65] Setting default-storageclass=true in profile "newest-cni-184042"
	I1107 18:44:27.352529    9208 addons.go:65] Setting metrics-server=true in profile "newest-cni-184042"
	I1107 18:44:27.352529    9208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-184042"
	I1107 18:44:27.352529    9208 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-184042"
	W1107 18:44:27.353554    9208 addons.go:236] addon storage-provisioner should already be in state true
	I1107 18:44:27.353554    9208 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:27.352529    9208 addons.go:227] Setting addon metrics-server=true in "newest-cni-184042"
	W1107 18:44:27.353554    9208 addons.go:236] addon metrics-server should already be in state true
	I1107 18:44:27.353554    9208 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:27.352529    9208 addons.go:65] Setting dashboard=true in profile "newest-cni-184042"
	I1107 18:44:27.354569    9208 addons.go:227] Setting addon dashboard=true in "newest-cni-184042"
	W1107 18:44:27.355529    9208 addons.go:236] addon dashboard should already be in state true
	I1107 18:44:27.356540    9208 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:27.381572    9208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:44:27.394577    9208 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:27.395587    9208 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:27.396588    9208 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:27.397576    9208 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:27.748086    9208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 18:44:27.752146    9208 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:44:27.752146    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 18:44:27.765097    9208 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1107 18:44:27.772096    9208 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1107 18:44:27.768097    9208 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 18:44:27.772096    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 18:44:27.771107    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:27.789136    9208 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1107 18:44:27.792097    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:27.793098    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1107 18:44:27.793098    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1107 18:44:27.805097    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:27.391606    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:29.880326    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:27.579087    6656 network_create.go:99] docker network calico-182331 192.168.85.0/24 created
	I1107 18:44:27.579087    6656 kic.go:106] calculated static IP "192.168.85.2" for the "calico-182331" container
	I1107 18:44:27.609108    6656 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 18:44:27.982290    6656 cli_runner.go:164] Run: docker volume create calico-182331 --label name.minikube.sigs.k8s.io=calico-182331 --label created_by.minikube.sigs.k8s.io=true
	I1107 18:44:28.294548    6656 oci.go:103] Successfully created a docker volume calico-182331
	I1107 18:44:28.310515    6656 cli_runner.go:164] Run: docker run --rm --name calico-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --entrypoint /usr/bin/test -v calico-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 18:44:30.866118    6656 cli_runner.go:217] Completed: docker run --rm --name calico-182331-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-182331 --entrypoint /usr/bin/test -v calico-182331:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (2.5555758s)
	I1107 18:44:30.866118    6656 oci.go:107] Successfully prepared a docker volume calico-182331
	I1107 18:44:30.866118    6656 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:44:30.866118    6656 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 18:44:30.882104    6656 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-182331:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
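The two `docker run --rm` invocations above are a volume-priming pattern: a throwaway container first validates that the named volume mounts and contains /var/lib (`--entrypoint /usr/bin/test ... -d /var/lib`), then a second container un-tars the lz4-compressed preload of Kubernetes images into it. A hedged sketch of the same two steps; the image tag and paths are illustrative, not minikube's exact values:

    package main

    import (
        "fmt"
        "os/exec"
    )

    const kicbase = "gcr.io/k8s-minikube/kicbase:v0.0.36" // digest omitted for brevity

    // primeVolume reproduces the pattern in the log: probe the named volume
    // with a test container, then extract a .tar.lz4 preload into it.
    func primeVolume(volume, tarball string) error {
        // Step 1: mount the volume and check a directory exists inside it.
        probe := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", volume+":/var", kicbase, "-d", "/var/lib")
        if out, err := probe.CombinedOutput(); err != nil {
            return fmt.Errorf("volume probe failed: %v: %s", err, out)
        }
        // Step 2: stream the lz4-compressed tarball into the volume.
        extract := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := extract.CombinedOutput(); err != nil {
            return fmt.Errorf("preload extract failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(primeVolume("calico-182331", `C:\preload\images.tar.lz4`))
    }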
	I1107 18:44:27.892251    9208 addons.go:227] Setting addon default-storageclass=true in "newest-cni-184042"
	W1107 18:44:27.892251    9208 addons.go:236] addon default-storageclass should already be in state true
	I1107 18:44:27.892251    9208 host.go:66] Checking if "newest-cni-184042" exists ...
	I1107 18:44:27.939859    9208 cli_runner.go:164] Run: docker container inspect newest-cni-184042 --format={{.State.Status}}
	I1107 18:44:28.074495    9208 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 18:44:28.089490    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:28.118505    9208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61904 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\newest-cni-184042\id_rsa Username:docker}
	I1107 18:44:28.143510    9208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61904 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\newest-cni-184042\id_rsa Username:docker}
	I1107 18:44:28.159535    9208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61904 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\newest-cni-184042\id_rsa Username:docker}
	I1107 18:44:28.254522    9208 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 18:44:28.254522    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 18:44:28.270502    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-184042
	I1107 18:44:28.409508    9208 api_server.go:51] waiting for apiserver process to appear ...
	I1107 18:44:28.423512    9208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 18:44:28.442495    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 18:44:28.442495    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 18:44:28.541528    9208 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 18:44:28.541528    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1107 18:44:28.563512    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 18:44:28.563512    9208 api_server.go:71] duration metric: took 1.2179669s to wait for apiserver process to appear ...
	I1107 18:44:28.563512    9208 api_server.go:87] waiting for apiserver healthz status ...
	I1107 18:44:28.563512    9208 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61908/healthz ...
	I1107 18:44:28.596510    9208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61904 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\newest-cni-184042\id_rsa Username:docker}
	I1107 18:44:28.645503    9208 api_server.go:278] https://127.0.0.1:61908/healthz returned 200:
	ok
	I1107 18:44:28.650522    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 18:44:28.650522    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 18:44:28.655538    9208 api_server.go:140] control plane version: v1.25.3
	I1107 18:44:28.655538    9208 api_server.go:130] duration metric: took 92.0249ms to wait for apiserver health ...
	I1107 18:44:28.655538    9208 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 18:44:28.750510    9208 system_pods.go:59] 8 kube-system pods found
	I1107 18:44:28.750510    9208 system_pods.go:61] "coredns-565d847f94-tss8d" [d512268b-5493-4a76-853d-e9f31400a7b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 18:44:28.750510    9208 system_pods.go:61] "etcd-newest-cni-184042" [0d715b98-2231-42a0-8573-c1d9eab5057f] Running
	I1107 18:44:28.750510    9208 system_pods.go:61] "kube-apiserver-newest-cni-184042" [ba764556-2b61-441c-a6a5-fd7e0bd8211a] Running
	I1107 18:44:28.750510    9208 system_pods.go:61] "kube-controller-manager-newest-cni-184042" [5dc88734-a27a-44c4-a274-9c4b4df99114] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 18:44:28.750510    9208 system_pods.go:61] "kube-proxy-ghl24" [777fe1ae-6e59-4ec8-be2d-2e48dd29ce38] Running
	I1107 18:44:28.750510    9208 system_pods.go:61] "kube-scheduler-newest-cni-184042" [61c3e37c-060d-498c-8dc6-0a71b7b0d54f] Running
	I1107 18:44:28.751523    9208 system_pods.go:61] "metrics-server-5c8fd5cf8-8zhxb" [2a3fbd51-1a7a-435b-85bc-88b33a7b6003] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 18:44:28.751523    9208 system_pods.go:61] "storage-provisioner" [1f50db4d-b73e-4903-aa55-96edc2ec2c37] Running
	I1107 18:44:28.751523    9208 system_pods.go:74] duration metric: took 95.9832ms to wait for pod list to return data ...
	I1107 18:44:28.751523    9208 default_sa.go:34] waiting for default service account to be created ...
	I1107 18:44:28.836510    9208 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 18:44:28.836510    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 18:44:28.942499    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 18:44:28.942499    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 18:44:28.956524    9208 default_sa.go:45] found service account: "default"
	I1107 18:44:28.956524    9208 default_sa.go:55] duration metric: took 204.9989ms for default service account to be created ...
	I1107 18:44:28.956524    9208 kubeadm.go:573] duration metric: took 1.6109739s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1107 18:44:28.956524    9208 node_conditions.go:102] verifying NodePressure condition ...
	I1107 18:44:28.968516    9208 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1107 18:44:28.968516    9208 node_conditions.go:123] node cpu capacity is 16
	I1107 18:44:28.968516    9208 node_conditions.go:105] duration metric: took 11.9918ms to run NodePressure ...
	I1107 18:44:28.968516    9208 start.go:217] waiting for startup goroutines ...
	I1107 18:44:29.040530    9208 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 18:44:29.040530    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 18:44:29.248523    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 18:44:29.248523    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1107 18:44:29.275511    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 18:44:29.277517    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 18:44:29.454751    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 18:44:29.454751    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 18:44:29.769325    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 18:44:29.769325    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 18:44:30.638531    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 18:44:30.638531    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 18:44:30.834115    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 18:44:30.834115    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 18:44:30.966117    9208 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 18:44:30.966117    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 18:44:31.103519    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 18:44:33.949631    9208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3851121s)
	I1107 18:44:34.064628    9208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.7870589s)
	I1107 18:44:34.064628    9208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.7890648s)
	I1107 18:44:34.064628    9208 addons.go:457] Verifying addon metrics-server=true in "newest-cni-184042"
	I1107 18:44:34.634199    9208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.5306413s)
	I1107 18:44:34.638238    9208 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-184042 addons enable metrics-server	
	
	
	I1107 18:44:34.646195    9208 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard
	I1107 18:44:34.649202    9208 addons.go:488] enableAddons completed in 7.3035905s
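The addons.go lines above follow one pattern per addon: render the manifest bytes in memory, "scp memory" them to /etc/kubernetes/addons/*.yaml over SSH, then issue a single `kubectl apply` with a `-f` flag per file. A simplified sketch of that flow, running kubectl locally against a temp directory instead of over SSH (an assumption for the sake of a self-contained example):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // applyAddon writes each rendered manifest to disk, then applies them
    // all in one `kubectl apply` with repeated -f flags, as in the log.
    func applyAddon(manifests map[string][]byte) error {
        dir, err := os.MkdirTemp("", "addons")
        if err != nil {
            return err
        }
        args := []string{"apply"}
        for name, data := range manifests {
            path := filepath.Join(dir, name)
            if err := os.WriteFile(path, data, 0o644); err != nil {
                return err
            }
            args = append(args, "-f", path)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(applyAddon(map[string][]byte{
            "dashboard-ns.yaml": []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n"),
        }))
    }

Batching all ten dashboard manifests into one apply is why the log shows a single 3.5s `ssh_runner.go:235] Completed:` line rather than ten round trips.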
	I1107 18:44:34.673226    9208 ssh_runner.go:195] Run: rm -f paused
	I1107 18:44:34.957842    9208 start.go:506] kubectl: 1.18.2, cluster: 1.25.3 (minor skew: 7)
	I1107 18:44:34.959813    9208 out.go:177] 
	W1107 18:44:34.965797    9208 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3.
	I1107 18:44:34.967797    9208 out.go:177]   - Want kubectl v1.25.3? Try 'minikube kubectl -- get pods -A'
	I1107 18:44:34.983439    9208 out.go:177] * Done! kubectl is now configured to use "newest-cni-184042" cluster and "default" namespace by default
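The warning just above compares the local kubectl (1.18.2, from chocolatey) against the cluster version (1.25.3) and reports "minor skew: 7"; kubectl only officially supports a skew of one minor version in either direction, hence the suggestion to use `minikube kubectl` instead. A small sketch of that check:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew parses "major.minor.patch" strings and returns the absolute
    // difference of the minor components, as in "minor skew: 7" above.
    func minorSkew(client, server string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(server)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, _ := minorSkew("1.18.2", "1.25.3")
        fmt.Printf("minor skew: %d (kubectl supports +/-1)\n", skew) // prints 7
    }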
	I1107 18:44:31.940516    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:33.941643    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:36.389300    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:38.881471    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:40.881937    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:43.135047    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:45.452545    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
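The pod_ready.go lines above are the poll that ultimately times out in TestNetworkPlugins/group/cilium/Start: the cilium-k65t2 pod's PodReady condition never becomes True. A minimal client-go sketch of the same readiness check (the poll count and interval here are illustrative, not minikube's actual timeout):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True, which
    // is the check the log keeps printing as "Ready":"False".
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for i := 0; i < 10; i++ {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-k65t2", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("ready")
                return
            }
            time.Sleep(2 * time.Second) // the log polls on a similar cadence
        }
        fmt.Println("still not ready")
    }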
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 18:43:42 UTC, end at Mon 2022-11-07 18:44:48 UTC. --
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.731013400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.790205400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812441600Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812562400Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812580700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812588900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812597100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812607500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.813158700Z" level=info msg="Loading containers: start."
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.473199300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.638661000Z" level=info msg="Loading containers: done."
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.713147100Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.713344100Z" level=info msg="Daemon has completed initialization"
	Nov 07 18:43:54 newest-cni-184042 systemd[1]: Started Docker Application Container Engine.
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.817658200Z" level=info msg="API listen on [::]:2376"
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.830677400Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 18:44:25 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:25.679479800Z" level=info msg="ignoring event" container=a811db2ecb49ef3dbf38ba909368c601f14a606b39ef634b1dabca46781c1a48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:26 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:26.130998300Z" level=info msg="ignoring event" container=247d038ebc9141d0c5d362658f83f64ce5d74801137725fd5a213522c6649dfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:31 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:31.440399200Z" level=info msg="ignoring event" container=a99d4278dad9a0068d6ada01c2c368b6a278c391e972b81eeb55695d506ddc30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:31 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:31.832861400Z" level=info msg="ignoring event" container=32cea441251ea7ba93ff7ae2a9ce4aa58c98486653ce4468835b1a8d8b907124 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:36 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:36.455550300Z" level=info msg="ignoring event" container=dcf345fa484f9d8a6dfbe1a7d2c218f8d01e6f0c56ae096af825eb6b6399fe14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:39 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:39.941225000Z" level=info msg="ignoring event" container=6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:41 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:41.536117800Z" level=info msg="ignoring event" container=67d3fa8b323c091c00757fb0088de742cfa3d0279dacf0e2b285f271956d6141 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:43 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:43.461288200Z" level=info msg="ignoring event" container=0f540d824e8b657c5a7a67339c195c3b4ed3576d869ecf9a5320698ae47c56d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:43 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:43.696341900Z" level=info msg="ignoring event" container=639167d71924bbe24e15a2f1de3e7b3df2943e8c4008ad81a61b1d09fc0a445b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	9afff48667c8b       6e38f40d628db       26 seconds ago       Running             storage-provisioner       1                   e10c938414f07
	64661acfd1b30       beaaf00edd38a       26 seconds ago       Running             kube-proxy                1                   48d53909123bb
	5fef7f25a8223       6d23ec0e8b87e       41 seconds ago       Running             kube-scheduler            1                   89bd511a700e6
	4f8569486774d       6039992312758       41 seconds ago       Running             kube-controller-manager   1                   5612c8b4c22fa
	05c970ce0ed6d       a8a176a5d5d69       41 seconds ago       Running             etcd                      1                   0d4b0b6b07752
	190d28bb055e9       0346dbd74bcb9       41 seconds ago       Running             kube-apiserver            1                   73597d02e34dd
	e948d3d88eef4       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   c1b4994a9895a
	4293c85e970a8       beaaf00edd38a       About a minute ago   Exited              kube-proxy                0                   1bcc8841821a7
	a2f4e92311dcc       a8a176a5d5d69       About a minute ago   Exited              etcd                      0                   1249b47787cf9
	fadafe8ae9b31       0346dbd74bcb9       About a minute ago   Exited              kube-apiserver            0                   6a33f03e1230f
	36b2a34cab3d5       6039992312758       About a minute ago   Exited              kube-controller-manager   0                   bc1849d07d22f
	c6d275ecae461       6d23ec0e8b87e       About a minute ago   Exited              kube-scheduler            0                   61950589a6ff0
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Nov 7 18:18] WSL2: Performing memory compaction.
	[Nov 7 18:19] WSL2: Performing memory compaction.
	[Nov 7 18:20] process 'docker/tmp/qemu-check426843351/check' started with executable stack
	[Nov 7 18:21] WSL2: Performing memory compaction.
	[Nov 7 18:23] WSL2: Performing memory compaction.
	[Nov 7 18:24] WSL2: Performing memory compaction.
	[Nov 7 18:27] WSL2: Performing memory compaction.
	[Nov 7 18:28] hrtimer: interrupt took 314000 ns
	[Nov 7 18:29] WSL2: Performing memory compaction.
	[Nov 7 18:40] WSL2: Performing memory compaction.
	[Nov 7 18:41] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [05c970ce0ed6] <==
	* {"level":"info","ts":"2022-11-07T18:44:33.824Z","caller":"traceutil/trace.go:171","msg":"trace[279524093] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:510; }","duration":"282.7199ms","start":"2022-11-07T18:44:33.541Z","end":"2022-11-07T18:44:33.824Z","steps":["trace[279524093] 'agreement among raft nodes before linearized reading'  (duration: 255.452ms)","trace[279524093] 'range keys from in-memory index tree'  (duration: 27.0664ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:44:36.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"416.8433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:44:36.364Z","caller":"traceutil/trace.go:171","msg":"trace[1645096699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:527; }","duration":"417.1038ms","start":"2022-11-07T18:44:35.947Z","end":"2022-11-07T18:44:36.364Z","steps":["trace[1645096699] 'range keys from in-memory index tree'  (duration: 416.5294ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:36.364Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:44:35.947Z","time spent":"417.1923ms","remote":"127.0.0.1:44270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-11-07T18:44:38.859Z","caller":"traceutil/trace.go:171","msg":"trace[1914344396] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"116.599ms","start":"2022-11-07T18:44:38.742Z","end":"2022-11-07T18:44:38.859Z","steps":["trace[1914344396] 'process raft request'  (duration: 100.5005ms)","trace[1914344396] 'compare'  (duration: 14.5202ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:44:39.032Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.3181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/newest-cni-184042\" ","response":"range_response_count:1 size:691"}
	{"level":"info","ts":"2022-11-07T18:44:39.032Z","caller":"traceutil/trace.go:171","msg":"trace[752696002] range","detail":"{range_begin:/registry/csinodes/newest-cni-184042; range_end:; response_count:1; response_revision:549; }","duration":"163.4409ms","start":"2022-11-07T18:44:38.869Z","end":"2022-11-07T18:44:39.032Z","steps":["trace[752696002] 'agreement among raft nodes before linearized reading'  (duration: 163.214ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"164.8942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/newest-cni-184042\" ","response":"range_response_count:1 size:571"}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.8145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/metrics-server-5c8fd5cf8\" ","response":"range_response_count:1 size:3190"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[1011042921] range","detail":"{range_begin:/registry/leases/kube-node-lease/newest-cni-184042; range_end:; response_count:1; response_revision:549; }","duration":"164.968ms","start":"2022-11-07T18:44:38.868Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[1011042921] 'agreement among raft nodes before linearized reading'  (duration: 164.2458ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[881045888] range","detail":"{range_begin:/registry/replicasets/kube-system/metrics-server-5c8fd5cf8; range_end:; response_count:1; response_revision:549; }","duration":"165.8697ms","start":"2022-11-07T18:44:38.867Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[881045888] 'agreement among raft nodes before linearized reading'  (duration: 165.1784ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"164.9754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[1123872363] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:549; }","duration":"165.0576ms","start":"2022-11-07T18:44:38.868Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[1123872363] 'agreement among raft nodes before linearized reading'  (duration: 164.0015ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.3671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-565d847f94\" ","response":"range_response_count:1 size:3847"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[171905595] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-565d847f94; range_end:; response_count:1; response_revision:549; }","duration":"166.417ms","start":"2022-11-07T18:44:38.867Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[171905595] 'agreement among raft nodes before linearized reading'  (duration: 165.078ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.132Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.3704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:44:39.132Z","caller":"traceutil/trace.go:171","msg":"trace[1950611678] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:551; }","duration":"184.8388ms","start":"2022-11-07T18:44:38.947Z","end":"2022-11-07T18:44:39.132Z","steps":["trace[1950611678] 'agreement among raft nodes before linearized reading'  (duration: 184.3153ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.1605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"info","ts":"2022-11-07T18:44:39.151Z","caller":"traceutil/trace.go:171","msg":"trace[47706546] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:557; }","duration":"104.3558ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[47706546] 'agreement among raft nodes before linearized reading'  (duration: 104.1007ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.2877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.2945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh\" ","response":"range_response_count:1 size:2792"}
	{"level":"info","ts":"2022-11-07T18:44:39.151Z","caller":"traceutil/trace.go:171","msg":"trace[1859811716] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:557; }","duration":"104.6207ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[1859811716] 'agreement among raft nodes before linearized reading'  (duration: 104.2272ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:44:39.152Z","caller":"traceutil/trace.go:171","msg":"trace[1110657043] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh; range_end:; response_count:1; response_revision:557; }","duration":"104.3762ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[1110657043] 'agreement among raft nodes before linearized reading'  (duration: 104.2888ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.152Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.0275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5172"}
	{"level":"info","ts":"2022-11-07T18:44:39.152Z","caller":"traceutil/trace.go:171","msg":"trace[324921335] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:557; }","duration":"103.1992ms","start":"2022-11-07T18:44:39.049Z","end":"2022-11-07T18:44:39.152Z","steps":["trace[324921335] 'agreement among raft nodes before linearized reading'  (duration: 102.1711ms)"],"step_count":1}
	
	* 
	* ==> etcd [a2f4e92311dc] <==
	* {"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[663198329] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.0056ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[663198329] 'agreement among raft nodes before linearized reading'  (duration: 107.7658ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.2038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-184042\" ","response":"range_response_count:1 size:6894"}
	{"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[84253984] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.2457ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[84253984] 'agreement among raft nodes before linearized reading'  (duration: 108.1706ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.0138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-184042\" ","response":"range_response_count:1 size:4273"}
	{"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[1378156960] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.4403ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[1378156960] 'agreement among raft nodes before linearized reading'  (duration: 107.9739ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.833Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.7424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"info","ts":"2022-11-07T18:43:19.833Z","caller":"traceutil/trace.go:171","msg":"trace[1148470588] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:317; }","duration":"101.9885ms","start":"2022-11-07T18:43:19.731Z","end":"2022-11-07T18:43:19.833Z","steps":["trace[1148470588] 'agreement among raft nodes before linearized reading'  (duration: 98.1015ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:26.691Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.5139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-v24jl\" ","response":"range_response_count:1 size:4516"}
	{"level":"info","ts":"2022-11-07T18:43:26.691Z","caller":"traceutil/trace.go:171","msg":"trace[661402472] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-v24jl; range_end:; response_count:1; response_revision:379; }","duration":"110.6711ms","start":"2022-11-07T18:43:26.580Z","end":"2022-11-07T18:43:26.691Z","steps":["trace[661402472] 'agreement among raft nodes before linearized reading'  (duration: 77.6481ms)","trace[661402472] 'range keys from in-memory index tree'  (duration: 32.5056ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[472503022] linearizableReadLoop","detail":"{readStateIndex:409; appliedIndex:407; }","duration":"114.9573ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[472503022] 'read index received'  (duration: 99.7149ms)","trace[472503022] 'applied index is now lower than readState.Index'  (duration: 15.2393ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[375325545] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"115.182ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[375325545] 'process raft request'  (duration: 114.8138ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[510430390] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"157.1741ms","start":"2022-11-07T18:43:29.188Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[510430390] 'process raft request'  (duration: 141.803ms)","trace[510430390] 'compare'  (duration: 14.7726ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:43:29.346Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.7666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:43:29.346Z","caller":"traceutil/trace.go:171","msg":"trace[1511392306] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:393; }","duration":"115.8568ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.346Z","steps":["trace[1511392306] 'agreement among raft nodes before linearized reading'  (duration: 115.7322ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:43:32.441Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T18:43:32.441Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-184042","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/11/07 18:43:32 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"warn","ts":"2022-11-07T18:43:32.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.7551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2022-11-07T18:43:32.548Z","caller":"traceutil/trace.go:171","msg":"trace[1419021998] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; }","duration":"111.9758ms","start":"2022-11-07T18:43:32.436Z","end":"2022-11-07T18:43:32.548Z","steps":["trace[1419021998] 'agreement among raft nodes before linearized reading'  (duration: 93.5661ms)"],"step_count":1}
	WARNING: 2022/11/07 18:43:32 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2022/11/07 18:43:32 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T18:43:32.643Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-11-07T18:43:32.742Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-11-07T18:43:32.744Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-11-07T18:43:32.744Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-184042","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:45:00 up  2:00,  0 users,  load average: 12.02, 10.91, 8.18
	Linux newest-cni-184042 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [190d28bb055e] <==
	* Trace[1709041045]: [1.8315127s] [1.8315127s] END
	I1107 18:44:20.965255       1 trace.go:205] Trace[767738611]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:3a3087ee-1e8a-449b-93d0-eb03badeea4f,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1815ms):
	Trace[767738611]: ---"Write to database call finished" len:162,err:<nil> 1814ms (18:44:20.965)
	Trace[767738611]: [1.8151154s] [1.8151154s] END
	I1107 18:44:20.965705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 18:44:20.966096       1 trace.go:205] Trace[329695775]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:88dc4ee4-04a5-4477-b46e-b39faee654b6,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1815ms):
	Trace[329695775]: ---"Write to database call finished" len:151,err:<nil> 1815ms (18:44:20.965)
	Trace[329695775]: [1.8158174s] [1.8158174s] END
	I1107 18:44:20.967632       1 trace.go:205] Trace[366400045]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:e3614e84-b0d0-4baa-9c1e-63c70a6f3353,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1817ms):
	Trace[366400045]: ---"Write to database call finished" len:148,err:<nil> 1816ms (18:44:20.967)
	Trace[366400045]: [1.8175062s] [1.8175062s] END
	I1107 18:44:20.971021       1 trace.go:205] Trace[1174335176]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:f699a22e-77f6-4f47-a84e-fb9c68c8894e,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1820ms):
	Trace[1174335176]: ---"Write to database call finished" len:156,err:<nil> 1820ms (18:44:20.970)
	Trace[1174335176]: [1.8206758s] [1.8206758s] END
	I1107 18:44:25.467260       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 18:44:25.683034       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 18:44:26.246948       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 18:44:26.734165       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 18:44:26.840670       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 18:44:33.457152       1 controller.go:616] quota admission added evaluator for: namespaces
	I1107 18:44:34.472415       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.145.146]
	I1107 18:44:34.563939       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.104.63.82]
	I1107 18:44:38.453528       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I1107 18:44:38.635070       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 18:44:38.739137       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [fadafe8ae9b3] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546606       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546737       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546989       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [36b2a34cab3d] <==
	* I1107 18:43:18.936449       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-newest-cni-184042" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1107 18:43:18.936807       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-newest-cni-184042" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1107 18:43:18.945908       1 range_allocator.go:367] Set node newest-cni-184042 PodCIDR to [192.168.0.0/24]
	I1107 18:43:18.972553       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:43:19.030156       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 18:43:19.044071       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:43:19.044710       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:43:19.130259       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1107 18:43:19.130414       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:43:19.130624       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:43:19.237908       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1107 18:43:19.530742       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:43:19.530870       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 18:43:19.539389       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:43:19.840215       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I1107 18:43:19.934300       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ghl24"
	I1107 18:43:20.045224       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-v24jl"
	I1107 18:43:20.057094       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-tss8d"
	I1107 18:43:20.351235       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1107 18:43:20.444202       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-v24jl"
	I1107 18:43:23.866530       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1107 18:43:29.480986       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I1107 18:43:29.541594       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1107 18:43:29.560393       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I1107 18:43:29.644191       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-8zhxb"
	
	* 
	* ==> kube-controller-manager [4f8569486774] <==
	* I1107 18:44:38.351832       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W1107 18:44:38.351915       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-184042. Assuming now as a timestamp.
	I1107 18:44:38.351976       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 18:44:38.432515       1 event.go:294] "Event occurred" object="newest-cni-184042" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-184042 event: Registered Node newest-cni-184042 in Controller"
	I1107 18:44:38.432864       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 18:44:38.432900       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:44:38.433707       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 18:44:38.433807       1 taint_manager.go:209] "Sending events to api server"
	I1107 18:44:38.433824       1 shared_informer.go:262] Caches are synced for namespace
	I1107 18:44:38.434338       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:44:38.435691       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	E1107 18:44:38.440162       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1107 18:44:38.451674       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1107 18:44:38.532460       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:44:38.532625       1 shared_informer.go:262] Caches are synced for stateful set
	I1107 18:44:38.532838       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:44:38.535837       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1107 18:44:38.539236       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:44:38.549025       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I1107 18:44:38.549185       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-57bbdc5f89 to 1"
	I1107 18:44:38.839645       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:44:38.860580       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:44:38.860618       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 18:44:38.940077       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-2zdmh"
	I1107 18:44:39.041268       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-57bbdc5f89-6b6wx"
	
	* 
	* ==> kube-proxy [4293c85e970a] <==
	* I1107 18:43:23.929622       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:43:23.945374       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:43:23.949405       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:43:23.953354       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:43:23.957371       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1107 18:43:24.049075       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1107 18:43:24.049227       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1107 18:43:24.049274       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:43:24.235285       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:43:24.235361       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:43:24.235383       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:43:24.235412       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:43:24.235447       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:43:24.235798       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:43:24.236116       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:43:24.236135       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:43:24.243737       1 config.go:317] "Starting service config controller"
	I1107 18:43:24.243765       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:43:24.243976       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:43:24.243993       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:43:24.244473       1 config.go:444] "Starting node config controller"
	I1107 18:43:24.244490       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:43:24.345807       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:43:24.345972       1 shared_informer.go:262] Caches are synced for service config
	I1107 18:43:24.346427       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [64661acfd1b3] <==
	* I1107 18:44:25.561018       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:44:25.568385       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:44:25.633319       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:44:25.637594       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:44:25.640959       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1107 18:44:25.771511       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1107 18:44:25.771684       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1107 18:44:25.771728       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:44:26.037817       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:44:26.037954       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:44:26.037977       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:44:26.038006       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:44:26.038050       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:44:26.041120       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:44:26.042873       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:44:26.042897       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:44:26.046639       1 config.go:444] "Starting node config controller"
	I1107 18:44:26.046680       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:44:26.046738       1 config.go:317] "Starting service config controller"
	I1107 18:44:26.046749       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:44:26.046786       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:44:26.047068       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:44:26.148112       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:44:26.148150       1 shared_informer.go:262] Caches are synced for node config
	I1107 18:44:26.148255       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [5fef7f25a822] <==
	* E1107 18:44:18.545294       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 18:44:18.545323       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.545180       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 18:44:18.545461       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 18:44:18.545336       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 18:44:18.545588       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 18:44:18.545762       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.546052       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.545525       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.546255       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.546394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 18:44:18.546432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 18:44:18.547017       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1107 18:44:18.547065       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W1107 18:44:18.547197       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 18:44:18.547226       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:44:18.630730       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.633034       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.632459       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1107 18:44:18.633893       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1107 18:44:18.632445       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1107 18:44:18.633946       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1107 18:44:18.632609       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 18:44:18.633984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	I1107 18:44:19.740608       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c6d275ecae46] <==
	* W1107 18:43:03.583043       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 18:43:03.583120       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 18:43:03.645261       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 18:43:03.645388       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1107 18:43:03.763449       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 18:43:03.763576       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:43:03.764184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.764340       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.788137       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.788278       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.859524       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.859684       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.886661       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 18:43:03.886785       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 18:43:03.936713       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 18:43:03.937019       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 18:43:03.937029       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 18:43:03.937056       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 18:43:03.999746       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 18:43:03.999870       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1107 18:43:05.852204       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:43:32.440091       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 18:43:32.440299       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1107 18:43:32.440394       1 run.go:74] "command failed" err="finished without leader elect"
	E1107 18:43:32.440460       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 18:43:42 UTC, end at Mon 2022-11-07 18:45:01 UTC. --
	Nov 07 18:44:39 newest-cni-184042 kubelet[1218]: I1107 18:44:39.235064    1218 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c038d127-fbe8-4e1b-9129-582d53346cf1-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-2zdmh\" (UID: \"c038d127-fbe8-4e1b-9129-582d53346cf1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.365802    1218 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  >
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.365990    1218 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  > pod="kube-system/metrics-server-5c8fd5cf8-8zhxb"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.366040    1218 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  > pod="kube-system/metrics-server-5c8fd5cf8-8zhxb"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.366215    1218 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c8fd5cf8-8zhxb_kube-system(2a3fbd51-1a7a-435b-85bc-88b33a7b6003)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system(2a3fbd51-1a7a-435b-85bc-88b33a7b6003)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" network for pod \\\"metrics-server-5c8fd5cf8-8zhxb\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" network for pod \\\"metrics-server-5c8fd5cf8-8zhxb\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: \\\"crio\\\" id: \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c8fd5cf8-8zhxb" podUID=2a3fbd51-1a7a-435b-85bc-88b33a7b6003
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: I1107 18:44:40.832023    1218 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="67d3fa8b323c091c00757fb0088de742cfa3d0279dacf0e2b285f271956d6141"
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 07 18:44:42 newest-cni-184042 kubelet[1218]: I1107 18:44:42.325626    1218 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: kubelet.service: Succeeded.
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [9afff48667c8] <==
	* I1107 18:44:25.343672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	* 
	* ==> storage-provisioner [e948d3d88eef] <==
	* I1107 18:43:28.531319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 18:43:28.567113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 18:43:28.567330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 18:43:28.657798       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 18:43:28.659819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714!
	I1107 18:43:28.658141       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae84d063-8899-4080-88e9-43f9f4572c37", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714 became leader
	I1107 18:43:28.764031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 18:45:00.691318    2416 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-184042 -n newest-cni-184042
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-184042 -n newest-cni-184042: exit status 2 (1.6603647s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-184042" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-184042
helpers_test.go:235: (dbg) docker inspect newest-cni-184042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72",
	        "Created": "2022-11-07T18:42:04.1283357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T18:43:41.7843919Z",
	            "FinishedAt": "2022-11-07T18:43:34.6903176Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/hostname",
	        "HostsPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/hosts",
	        "LogPath": "/var/lib/docker/containers/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72/68a70e21c94ab4c8ac527f08a6d04f973a1d9b545ff7e8e9615099f34bfc8a72-json.log",
	        "Name": "/newest-cni-184042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-184042:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-184042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5-init/diff:/var/lib/docker/overlay2/5ba40928978efc1ee3b35421e2a49e4e2a7d59d61b07bb8e461b5416c8a7cee7/diff:/var/lib/docker/overlay2/67e02326f2fb9638b3c744df240d022783ccecb7d0e13e0d4028b0f8bf17e69d/diff:/var/lib/docker/overlay2/2df41d3bee4190176a765702135566ea66b1390e8b91dfa86b8de2bce135a93a/diff:/var/lib/docker/overlay2/3ec94dbaa89905250e2398ca72e3bb9ff5dccddd8b415085183015f908fee35f/diff:/var/lib/docker/overlay2/3ff2e3a3d014a61bdc0a08d62538ff8c84667c0284decf8ecda52f68283ff0fb/diff:/var/lib/docker/overlay2/bec12fe29cd5fb8e9a7e5bb928cb25b20213dd7883f37ea7dd0a8e3bc0351052/diff:/var/lib/docker/overlay2/21c29267c8a16c82c45149aee257177584b1ce7c75fa787decd6c03a640936f7/diff:/var/lib/docker/overlay2/5552452888ed9ac6a45e539159cccc1e649ef7ad0bc04a4418eebab44d92e666/diff:/var/lib/docker/overlay2/3f5659bfc1d27650ea46807074a281c02900176a5f42ac3ce1101e612aea49a4/diff:/var/lib/docker/overlay2/95ed14
d67ee43712c9773f372551bf224bbcbf05234904cb75bfe650e5a9b431/diff:/var/lib/docker/overlay2/c61bea1335a18e64dabe990546948a49a1e791d643b48037370421d0751659c3/diff:/var/lib/docker/overlay2/4bceff48ae8e97fbcd073948091f9c7dbeadc230b98de67471c5522b9c386672/diff:/var/lib/docker/overlay2/23bacba3c342644af413c4af4dd2d246c778f3794857f6249648a877a053a59c/diff:/var/lib/docker/overlay2/b52423693db548690f91d1cd1a682e7dcffed995839ad13f0c371c2d681d58ae/diff:/var/lib/docker/overlay2/78ed02992e8d5b101283c1328bd5aaa12d7e0ca041f267cc87df49ef21d9bb03/diff:/var/lib/docker/overlay2/46157251f5db6a6570ed965e54b6f9c571885b984df59133027ccf004684e35b/diff:/var/lib/docker/overlay2/a7138fb69aba5dad874e92c39963591ac31b8c00283be1cef1f97bb03e29e95b/diff:/var/lib/docker/overlay2/c758e4b48f926dc6128c8daee3fc24a31cf68d0c856315d42cd496a0dbdd8539/diff:/var/lib/docker/overlay2/177fe0e8ee94dbc81e32cb39d5d299febe5bdcc240161d4b1835668fe03b5209/diff:/var/lib/docker/overlay2/f079d80f0588e1138baa92eb5c6d7f1bd3b748adbba870d85b973e09f3ebf494/diff:/var/lib/d
ocker/overlay2/c3813cada301ad2ba06f263b5ccf3e0b01ae80626c1d9caa7145c8b44f41463e/diff:/var/lib/docker/overlay2/72b362c3acbe525943f481d496d0727bf0f806a59448acd97435a15c292fef7e/diff:/var/lib/docker/overlay2/f3dae2918bbd88ecf6fa92ce58b695b5b7c2da5701725c4de1346a5152bfb602/diff:/var/lib/docker/overlay2/a9aa7189cf37379174133f86b5cd20db821dffd303a69bb90d8b837ef9314cae/diff:/var/lib/docker/overlay2/f2580cf4053e61b8bea5cd979c14376e4cb354a10cabb06928d54c1685d717ad/diff:/var/lib/docker/overlay2/935a0de03d362bfbb94f9caed18a864b47c082fd03de4bfa5ea3296602ab831a/diff:/var/lib/docker/overlay2/3cff685fb531dd4d8712d453d4acd726381268d9ddcd0c57a932182872cbf384/diff:/var/lib/docker/overlay2/112b35fd6eb67f7dfac734ed32e36fb98e01f15bd9c239c2f80d0bf851060ea4/diff:/var/lib/docker/overlay2/01282a02b23965342a99a1d1cc886e98e3cdc825c6ca80b04373c4406c9aa4f3/diff:/var/lib/docker/overlay2/bd54f122cc195ba2f796884b001defe75facaad0c89ccc34a6f6465aaa917fe9/diff:/var/lib/docker/overlay2/20dfd6c01cb2b243e552c3e422dd7b551e0db65fb0c630c438801d475ad
f77a1/diff:/var/lib/docker/overlay2/411ec7d4646f3c8ed6c04c781054e871311645faa7de90212e5c5454192092fd/diff:/var/lib/docker/overlay2/bb233cf9945b014c96c4bcbef2e9ef2f1e040f65910db652eb424af82e93768d/diff:/var/lib/docker/overlay2/a6de3a7d987b965f42f8379040ffd401aad9d38f67ac126754e8d62b555407aa/diff:/var/lib/docker/overlay2/b2ce15147e01c2b1eff488a0aec2cdcf950484589bf948d4b1f3a8a876232d09/diff:/var/lib/docker/overlay2/8a119f66dd46b7cc5f5ba77598b3979bf10ddf84081ea4872ec2ce3375d41684/diff:/var/lib/docker/overlay2/b3c7202a41b63567d929a27b911caefdba403bae7ea5f11b89f717ecb1013955/diff:/var/lib/docker/overlay2/d87eb4edb251e5b57913be1bf6653b8ad0988f5aefaf73d12984c2b91801af17/diff:/var/lib/docker/overlay2/df756f877bb755e1124e9ccaa62bd29d76f04822f12787db45118fcba1de223d/diff:/var/lib/docker/overlay2/ba2334ebb657af4b27997ce445bfc2ce0f740fb6fe3edba5a315042fd325a7d3/diff:/var/lib/docker/overlay2/ba4ef7e8994716049d65e5b49db39352db8c77cd45684b9516c827f4114572cb/diff:/var/lib/docker/overlay2/3df6d706ee5529d758e5ed38fd5b49f5733ae7
45d03cb146ad24eb8be305a2a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e94bde4fed69cc8d221bbea1ef99cba74ca62c4404e83c6f1f3428673fe59a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-184042",
	                "Source": "/var/lib/docker/volumes/newest-cni-184042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-184042",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-184042",
	                "name.minikube.sigs.k8s.io": "newest-cni-184042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21cd2cc8629289eb66a03cf5ae29f213d1c89ea74df76eb8aaa67e7491d0374e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61908"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/21cd2cc86292",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-184042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68a70e21c94a",
	                        "newest-cni-184042"
	                    ],
	                    "NetworkID": "e48d64473a88f312b9ba44f0f75d601aa1fe5a705ef8e87a426ad0bfa2769914",
	                    "EndpointID": "0e4d10ebbbecaa2b4da6df74b7f2b03613fe5133095bc139758adbb352fd5656",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042
E1107 18:45:05.785272    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042: exit status 2 (1.7807999s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
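The "(may be ok)" above reflects that `minikube status` encodes component health in its exit code: stdout still prints "Running" for the host while the command exits 2 because other components on the paused cluster are not healthy. A sketch of how a caller can distinguish that case, assuming only the behavior visible in this log (profile name and flags copied from the failing command; treating exit code 2 as informational mirrors helpers_test.go rather than a documented contract):

// statuscheck.go: run the same status probe and separate "non-zero but usable" from hard failure.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"status", "--format={{.Host}}", "-p", "newest-cni-184042", "-n", "newest-cni-184042")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit with usable stdout: report both, as the harness does above.
		fmt.Printf("status error: exit status %d (may be ok), host=%q\n", exitErr.ExitCode(), host)
		return
	}
	if err != nil {
		log.Fatal(err) // command could not run at all
	}
	fmt.Printf("host=%q\n", host)
}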
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-184042 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-184042 logs -n 25: (14.6706131s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p no-preload-182933                                       | no-preload-182933            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:40 GMT |
	| delete  | -p old-k8s-version-182839                                  | old-k8s-version-182839       | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:40 GMT |
	| start   | -p newest-cni-184042 --memory=2200 --alsologtostderr       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:43 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| delete  | -p no-preload-182933                                       | no-preload-182933            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:40 GMT | 07 Nov 22 18:41 GMT |
	| start   | -p auto-182327 --memory=2048                               | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:43 GMT |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-182958                                      | embed-certs-182958           | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	| start   | -p kindnet-182329                                          | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:43 GMT |
	|         | --memory=2048                                              |                              |                   |         |                     |                     |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                              |                              |                   |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |                   |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:41 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:41 GMT | 07 Nov 22 18:42 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-183055 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:42 GMT | 07 Nov 22 18:42 GMT |
	|         | default-k8s-diff-port-183055                               |                              |                   |         |                     |                     |
	| start   | -p cilium-182331 --memory=2048                             | cilium-182331                | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:42 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-184042                 | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-184042                                       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --alsologtostderr -v=3                                     |                              |                   |         |                     |                     |
	| ssh     | -p auto-182327 pgrep -a                                    | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-184042                      | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:43 GMT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |                   |         |                     |                     |
	| start   | -p newest-cni-184042 --memory=2200 --alsologtostderr       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:43 GMT | 07 Nov 22 18:44 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| ssh     | -p kindnet-182329 pgrep -a                                 | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| delete  | -p auto-182327                                             | auto-182327                  | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	| start   | -p calico-182331 --memory=2048                             | calico-182331                | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| ssh     | -p newest-cni-184042 sudo                                  | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| pause   | -p newest-cni-184042                                       | newest-cni-184042            | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p kindnet-182329                                          | kindnet-182329               | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT | 07 Nov 22 18:44 GMT |
	| start   | -p false-182329 --memory=2048                              | false-182329                 | minikube2\jenkins | v1.28.0 | 07 Nov 22 18:44 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 18:44:55
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 18:44:55.649349    4696 out.go:296] Setting OutFile to fd 1664 ...
	I1107 18:44:55.723716    4696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:55.723716    4696 out.go:309] Setting ErrFile to fd 1908...
	I1107 18:44:55.723716    4696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:44:55.748913    4696 out.go:303] Setting JSON to false
	I1107 18:44:55.751703    4696 start.go:116] hostinfo: {"hostname":"minikube2","uptime":11333,"bootTime":1667835362,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 18:44:55.751703    4696 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 18:44:55.755831    4696 out.go:177] * [false-182329] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 18:44:55.759015    4696 notify.go:220] Checking for updates...
	I1107 18:44:55.762300    4696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 18:44:55.764932    4696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 18:44:55.767099    4696 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 18:44:55.769677    4696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 18:44:52.876716    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:54.955720    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:44:55.772344    4696 config.go:180] Loaded profile config "calico-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:55.773037    4696 config.go:180] Loaded profile config "cilium-182331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:55.773037    4696 config.go:180] Loaded profile config "newest-cni-184042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:44:55.773598    4696 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 18:44:56.146538    4696 docker.go:137] docker version: linux-20.10.20
	I1107 18:44:56.157517    4696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:56.830547    4696 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:62 SystemTime:2022-11-07 18:44:56.3188829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:56.892623    4696 out.go:177] * Using the docker driver based on user configuration
	I1107 18:44:56.905199    4696 start.go:282] selected driver: docker
	I1107 18:44:56.905199    4696 start.go:808] validating driver "docker" against <nil>
	I1107 18:44:56.905307    4696 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 18:44:56.975424    4696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 18:44:57.630870    4696 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:62 SystemTime:2022-11-07 18:44:57.1442385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 18:44:57.630870    4696 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 18:44:57.631625    4696 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 18:44:57.641220    4696 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 18:44:57.645058    4696 cni.go:95] Creating CNI manager for "false"
	I1107 18:44:57.645058    4696 start_flags.go:317] config:
	{Name:false-182329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-182329 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 18:44:57.652122    4696 out.go:177] * Starting control plane node false-182329 in cluster false-182329
	I1107 18:44:57.656972    4696 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 18:44:57.661644    4696 out.go:177] * Pulling base image ...
	I1107 18:44:57.667151    4696 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 18:44:57.667151    4696 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 18:44:57.667151    4696 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 18:44:57.667151    4696 cache.go:57] Caching tarball of preloaded images
	I1107 18:44:57.667917    4696 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 18:44:57.668590    4696 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 18:44:57.668959    4696 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-182329\config.json ...
	I1107 18:44:57.669351    4696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-182329\config.json: {Name:mk5d82bf123030a1a35da230483effe93df9084a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 18:44:57.916631    4696 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 18:44:57.916821    4696 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 18:44:57.916821    4696 cache.go:208] Successfully downloaded all kic artifacts
	I1107 18:44:57.916975    4696 start.go:364] acquiring machines lock for false-182329: {Name:mk52d55f5939af20cbc1a73a25b90518845f5730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 18:44:57.917233    4696 start.go:368] acquired machines lock for "false-182329" in 177.8µs
	I1107 18:44:57.917477    4696 start.go:93] Provisioning new machine with config: &{Name:false-182329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-182329 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 18:44:57.917624    4696 start.go:125] createHost starting for "" (driver="docker")
	I1107 18:44:57.924498    4696 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 18:44:57.924498    4696 start.go:159] libmachine.API.Create for "false-182329" (driver="docker")
	I1107 18:44:57.924498    4696 client.go:168] LocalClient.Create starting
	I1107 18:44:57.925084    4696 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1107 18:44:57.925084    4696 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:57.925611    4696 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:57.925770    4696 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1107 18:44:57.925770    4696 main.go:134] libmachine: Decoding PEM data...
	I1107 18:44:57.925770    4696 main.go:134] libmachine: Parsing certificate...
	I1107 18:44:57.936619    4696 cli_runner.go:164] Run: docker network inspect false-182329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 18:44:58.120979    4696 cli_runner.go:211] docker network inspect false-182329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 18:44:58.130419    4696 network_create.go:272] running [docker network inspect false-182329] to gather additional debugging logs...
	I1107 18:44:58.130419    4696 cli_runner.go:164] Run: docker network inspect false-182329
	W1107 18:44:58.337920    4696 cli_runner.go:211] docker network inspect false-182329 returned with exit code 1
	I1107 18:44:58.338167    4696 network_create.go:275] error running [docker network inspect false-182329]: docker network inspect false-182329: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-182329
	I1107 18:44:58.338167    4696 network_create.go:277] output of [docker network inspect false-182329]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-182329
	
	** /stderr **
	I1107 18:44:58.352214    4696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 18:44:58.568692    4696 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0] misses:0}
	I1107 18:44:58.568692    4696 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:58.568692    4696 network_create.go:115] attempt to create docker network false-182329 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 18:44:58.577852    4696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329
	W1107 18:44:58.839209    4696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329 returned with exit code 1
	W1107 18:44:58.839493    4696 network_create.go:107] failed to create docker network false-182329 192.168.49.0/24, will retry: subnet is taken
	I1107 18:44:58.862453    4696 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:false}} dirty:map[] misses:0}
	I1107 18:44:58.862453    4696 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:58.882578    4696 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8] misses:0}
	I1107 18:44:58.883204    4696 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:58.883204    4696 network_create.go:115] attempt to create docker network false-182329 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 18:44:58.891527    4696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329
	W1107 18:44:59.134674    4696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329 returned with exit code 1
	W1107 18:44:59.134674    4696 network_create.go:107] failed to create docker network false-182329 192.168.58.0/24, will retry: subnet is taken
	I1107 18:44:59.159035    4696 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8] misses:1}
	I1107 18:44:59.159035    4696 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.179553    4696 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8 192.168.67.0:0xc0001262c8] misses:1}
	I1107 18:44:59.179614    4696 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.179614    4696 network_create.go:115] attempt to create docker network false-182329 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 18:44:59.189350    4696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329
	W1107 18:44:59.383746    4696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329 returned with exit code 1
	W1107 18:44:59.383746    4696 network_create.go:107] failed to create docker network false-182329 192.168.67.0/24, will retry: subnet is taken
	I1107 18:44:59.401759    4696 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8 192.168.67.0:0xc0001262c8] misses:2}
	I1107 18:44:59.402806    4696 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.420766    4696 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8 192.168.67.0:0xc0001262c8 192.168.76.0:0xc0006cd060] misses:2}
	I1107 18:44:59.420766    4696 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.420766    4696 network_create.go:115] attempt to create docker network false-182329 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 18:44:59.430044    4696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329
	W1107 18:44:59.616913    4696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329 returned with exit code 1
	W1107 18:44:59.616913    4696 network_create.go:107] failed to create docker network false-182329 192.168.76.0/24, will retry: subnet is taken
	I1107 18:44:59.636452    4696 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8 192.168.67.0:0xc0001262c8 192.168.76.0:0xc0006cd060] misses:3}
	I1107 18:44:59.636452    4696 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.658516    4696 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ccea0] amended:true}} dirty:map[192.168.49.0:0xc0006ccea0 192.168.58.0:0xc0006ccfc8 192.168.67.0:0xc0001262c8 192.168.76.0:0xc0006cd060 192.168.85.0:0xc0005187e0] misses:3}
	I1107 18:44:59.658516    4696 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 18:44:59.658516    4696 network_create.go:115] attempt to create docker network false-182329 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1107 18:44:59.666226    4696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329
	W1107 18:44:59.866890    4696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-182329 false-182329 returned with exit code 1
	W1107 18:44:59.866968    4696 network_create.go:107] failed to create docker network false-182329 192.168.85.0/24, will retry: subnet is taken
	W1107 18:44:59.867290    4696 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network false-182329: subnet is taken
	I1107 18:44:59.884388    4696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 18:45:00.108172    4696 cli_runner.go:164] Run: docker volume create false-182329 --label name.minikube.sigs.k8s.io=false-182329 --label created_by.minikube.sigs.k8s.io=true
	I1107 18:44:57.451159    9840 pod_ready.go:102] pod "cilium-k65t2" in "kube-system" namespace has status "Ready":"False"
	I1107 18:45:02.900240    4696 cli_runner.go:217] Completed: docker volume create false-182329 --label name.minikube.sigs.k8s.io=false-182329 --label created_by.minikube.sigs.k8s.io=true: (2.7920371s)
	I1107 18:45:02.900240    4696 oci.go:103] Successfully created a docker volume false-182329
	I1107 18:45:02.907261    4696 cli_runner.go:164] Run: docker run --rm --name false-182329-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-182329 --entrypoint /usr/bin/test -v false-182329:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
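The network_create.go sequence above (reserve 192.168.49.0/24, hit "subnet is taken", step to 192.168.58.0/24, and so on through 192.168.85.0/24 before warning and falling back) is minikube probing for a free /24, stepping the third octet by 9 on each attempt. The standalone Go sketch below reproduces that retry shape under stated assumptions: the helper is illustrative, not minikube's actual code, and the error matching on Docker's output is approximate.

// subnetprobe.go: try successive 192.168.x.0/24 candidates until `docker network create` succeeds.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tryCreate attempts to create a bridge network on the given /24 and reports
// whether Docker rejected it because the address range is already in use.
func tryCreate(name, subnet, gateway string) (taken bool, err error) {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
	if err == nil {
		return false, nil
	}
	// "Pool overlaps" is Docker's usual message for an occupied subnet; the log
	// above summarizes the same condition as "subnet is taken".
	if strings.Contains(string(out), "Pool overlaps") || strings.Contains(string(out), "subnet") {
		return true, nil
	}
	return false, fmt.Errorf("network create: %v: %s", err, out)
}

func main() {
	const name = "false-182329"
	for octet := 49; octet <= 85; octet += 9 { // 49, 58, 67, 76, 85 as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		taken, err := tryCreate(name, subnet, gateway)
		if err != nil {
			fmt.Println(err)
			return
		}
		if !taken {
			fmt.Println("created", name, "on", subnet)
			return
		}
		fmt.Println(subnet, "is taken, trying next candidate")
	}
	// All candidates exhausted: minikube warns and proceeds without a dedicated
	// network, which is why the log ends with "Unable to create dedicated network".
	fmt.Println("no free subnet found; falling back")
}

With five clusters already holding those /24s on this host, every candidate is taken, so the run above ends exactly on the fallback branch.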
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 18:43:42 UTC, end at Mon 2022-11-07 18:45:07 UTC. --
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.731013400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.790205400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812441600Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812562400Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812580700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812588900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812597100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.812607500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Nov 07 18:43:53 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:53.813158700Z" level=info msg="Loading containers: start."
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.473199300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.638661000Z" level=info msg="Loading containers: done."
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.713147100Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.713344100Z" level=info msg="Daemon has completed initialization"
	Nov 07 18:43:54 newest-cni-184042 systemd[1]: Started Docker Application Container Engine.
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.817658200Z" level=info msg="API listen on [::]:2376"
	Nov 07 18:43:54 newest-cni-184042 dockerd[643]: time="2022-11-07T18:43:54.830677400Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 18:44:25 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:25.679479800Z" level=info msg="ignoring event" container=a811db2ecb49ef3dbf38ba909368c601f14a606b39ef634b1dabca46781c1a48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:26 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:26.130998300Z" level=info msg="ignoring event" container=247d038ebc9141d0c5d362658f83f64ce5d74801137725fd5a213522c6649dfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:31 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:31.440399200Z" level=info msg="ignoring event" container=a99d4278dad9a0068d6ada01c2c368b6a278c391e972b81eeb55695d506ddc30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:31 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:31.832861400Z" level=info msg="ignoring event" container=32cea441251ea7ba93ff7ae2a9ce4aa58c98486653ce4468835b1a8d8b907124 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:36 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:36.455550300Z" level=info msg="ignoring event" container=dcf345fa484f9d8a6dfbe1a7d2c218f8d01e6f0c56ae096af825eb6b6399fe14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:39 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:39.941225000Z" level=info msg="ignoring event" container=6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:41 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:41.536117800Z" level=info msg="ignoring event" container=67d3fa8b323c091c00757fb0088de742cfa3d0279dacf0e2b285f271956d6141 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:43 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:43.461288200Z" level=info msg="ignoring event" container=0f540d824e8b657c5a7a67339c195c3b4ed3576d869ecf9a5320698ae47c56d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 18:44:43 newest-cni-184042 dockerd[643]: time="2022-11-07T18:44:43.696341900Z" level=info msg="ignoring event" container=639167d71924bbe24e15a2f1de3e7b3df2943e8c4008ad81a61b1d09fc0a445b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
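The "ignoring event ... TaskDelete" lines above are dockerd observing containerd tasks being torn down as the pause test kills pods. The same teardown can be watched from outside the daemon with `docker events`; a minimal sketch follows (the --filter and --format flags are standard docker CLI, the rest is illustrative and blocks until interrupted):

// watchteardown.go: stream container die events, one JSON object per line.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "events",
		"--filter", "type=container",
		"--filter", "event=die",
		"--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		fmt.Println(sc.Text()) // each line corresponds to one TaskDelete in dockerd's log
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}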
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	9afff48667c8b       6e38f40d628db       45 seconds ago       Running             storage-provisioner       1                   e10c938414f07
	64661acfd1b30       beaaf00edd38a       45 seconds ago       Running             kube-proxy                1                   48d53909123bb
	5fef7f25a8223       6d23ec0e8b87e       About a minute ago   Running             kube-scheduler            1                   89bd511a700e6
	4f8569486774d       6039992312758       About a minute ago   Running             kube-controller-manager   1                   5612c8b4c22fa
	05c970ce0ed6d       a8a176a5d5d69       About a minute ago   Running             etcd                      1                   0d4b0b6b07752
	190d28bb055e9       0346dbd74bcb9       About a minute ago   Running             kube-apiserver            1                   73597d02e34dd
	e948d3d88eef4       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   c1b4994a9895a
	4293c85e970a8       beaaf00edd38a       About a minute ago   Exited              kube-proxy                0                   1bcc8841821a7
	a2f4e92311dcc       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   1249b47787cf9
	fadafe8ae9b31       0346dbd74bcb9       2 minutes ago        Exited              kube-apiserver            0                   6a33f03e1230f
	36b2a34cab3d5       6039992312758       2 minutes ago        Exited              kube-controller-manager   0                   bc1849d07d22f
	c6d275ecae461       6d23ec0e8b87e       2 minutes ago        Exited              kube-scheduler            0                   61950589a6ff0
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Nov 7 18:19] WSL2: Performing memory compaction.
	[Nov 7 18:20] process 'docker/tmp/qemu-check426843351/check' started with executable stack
	[Nov 7 18:21] WSL2: Performing memory compaction.
	[Nov 7 18:23] WSL2: Performing memory compaction.
	[Nov 7 18:24] WSL2: Performing memory compaction.
	[Nov 7 18:27] WSL2: Performing memory compaction.
	[Nov 7 18:28] hrtimer: interrupt took 314000 ns
	[Nov 7 18:29] WSL2: Performing memory compaction.
	[Nov 7 18:40] WSL2: Performing memory compaction.
	[Nov 7 18:41] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [05c970ce0ed6] <==
	* {"level":"info","ts":"2022-11-07T18:44:33.824Z","caller":"traceutil/trace.go:171","msg":"trace[279524093] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:510; }","duration":"282.7199ms","start":"2022-11-07T18:44:33.541Z","end":"2022-11-07T18:44:33.824Z","steps":["trace[279524093] 'agreement among raft nodes before linearized reading'  (duration: 255.452ms)","trace[279524093] 'range keys from in-memory index tree'  (duration: 27.0664ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:44:36.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"416.8433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:44:36.364Z","caller":"traceutil/trace.go:171","msg":"trace[1645096699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:527; }","duration":"417.1038ms","start":"2022-11-07T18:44:35.947Z","end":"2022-11-07T18:44:36.364Z","steps":["trace[1645096699] 'range keys from in-memory index tree'  (duration: 416.5294ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:36.364Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-11-07T18:44:35.947Z","time spent":"417.1923ms","remote":"127.0.0.1:44270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-11-07T18:44:38.859Z","caller":"traceutil/trace.go:171","msg":"trace[1914344396] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"116.599ms","start":"2022-11-07T18:44:38.742Z","end":"2022-11-07T18:44:38.859Z","steps":["trace[1914344396] 'process raft request'  (duration: 100.5005ms)","trace[1914344396] 'compare'  (duration: 14.5202ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:44:39.032Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.3181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/newest-cni-184042\" ","response":"range_response_count:1 size:691"}
	{"level":"info","ts":"2022-11-07T18:44:39.032Z","caller":"traceutil/trace.go:171","msg":"trace[752696002] range","detail":"{range_begin:/registry/csinodes/newest-cni-184042; range_end:; response_count:1; response_revision:549; }","duration":"163.4409ms","start":"2022-11-07T18:44:38.869Z","end":"2022-11-07T18:44:39.032Z","steps":["trace[752696002] 'agreement among raft nodes before linearized reading'  (duration: 163.214ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"164.8942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/newest-cni-184042\" ","response":"range_response_count:1 size:571"}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.8145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/metrics-server-5c8fd5cf8\" ","response":"range_response_count:1 size:3190"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[1011042921] range","detail":"{range_begin:/registry/leases/kube-node-lease/newest-cni-184042; range_end:; response_count:1; response_revision:549; }","duration":"164.968ms","start":"2022-11-07T18:44:38.868Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[1011042921] 'agreement among raft nodes before linearized reading'  (duration: 164.2458ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[881045888] range","detail":"{range_begin:/registry/replicasets/kube-system/metrics-server-5c8fd5cf8; range_end:; response_count:1; response_revision:549; }","duration":"165.8697ms","start":"2022-11-07T18:44:38.867Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[881045888] 'agreement among raft nodes before linearized reading'  (duration: 165.1784ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"164.9754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[1123872363] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:549; }","duration":"165.0576ms","start":"2022-11-07T18:44:38.868Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[1123872363] 'agreement among raft nodes before linearized reading'  (duration: 164.0015ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.033Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.3671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-565d847f94\" ","response":"range_response_count:1 size:3847"}
	{"level":"info","ts":"2022-11-07T18:44:39.033Z","caller":"traceutil/trace.go:171","msg":"trace[171905595] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-565d847f94; range_end:; response_count:1; response_revision:549; }","duration":"166.417ms","start":"2022-11-07T18:44:38.867Z","end":"2022-11-07T18:44:39.033Z","steps":["trace[171905595] 'agreement among raft nodes before linearized reading'  (duration: 165.078ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.132Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.3704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:44:39.132Z","caller":"traceutil/trace.go:171","msg":"trace[1950611678] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:551; }","duration":"184.8388ms","start":"2022-11-07T18:44:38.947Z","end":"2022-11-07T18:44:39.132Z","steps":["trace[1950611678] 'agreement among raft nodes before linearized reading'  (duration: 184.3153ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.1605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"info","ts":"2022-11-07T18:44:39.151Z","caller":"traceutil/trace.go:171","msg":"trace[47706546] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:557; }","duration":"104.3558ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[47706546] 'agreement among raft nodes before linearized reading'  (duration: 104.1007ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.2877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"warn","ts":"2022-11-07T18:44:39.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.2945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh\" ","response":"range_response_count:1 size:2792"}
	{"level":"info","ts":"2022-11-07T18:44:39.151Z","caller":"traceutil/trace.go:171","msg":"trace[1859811716] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:557; }","duration":"104.6207ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[1859811716] 'agreement among raft nodes before linearized reading'  (duration: 104.2272ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:44:39.152Z","caller":"traceutil/trace.go:171","msg":"trace[1110657043] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh; range_end:; response_count:1; response_revision:557; }","duration":"104.3762ms","start":"2022-11-07T18:44:39.047Z","end":"2022-11-07T18:44:39.151Z","steps":["trace[1110657043] 'agreement among raft nodes before linearized reading'  (duration: 104.2888ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:44:39.152Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.0275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5172"}
	{"level":"info","ts":"2022-11-07T18:44:39.152Z","caller":"traceutil/trace.go:171","msg":"trace[324921335] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:557; }","duration":"103.1992ms","start":"2022-11-07T18:44:39.049Z","end":"2022-11-07T18:44:39.152Z","steps":["trace[324921335] 'agreement among raft nodes before linearized reading'  (duration: 102.1711ms)"],"step_count":1}
	
	* 
	* ==> etcd [a2f4e92311dc] <==
	* {"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[663198329] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.0056ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[663198329] 'agreement among raft nodes before linearized reading'  (duration: 107.7658ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.2038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-184042\" ","response":"range_response_count:1 size:6894"}
	{"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[84253984] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.2457ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[84253984] 'agreement among raft nodes before linearized reading'  (duration: 108.1706ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.0138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-184042\" ","response":"range_response_count:1 size:4273"}
	{"level":"info","ts":"2022-11-07T18:43:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[1378156960] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-184042; range_end:; response_count:1; response_revision:312; }","duration":"108.4403ms","start":"2022-11-07T18:43:19.545Z","end":"2022-11-07T18:43:19.653Z","steps":["trace[1378156960] 'agreement among raft nodes before linearized reading'  (duration: 107.9739ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:19.833Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.7424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-184042\" ","response":"range_response_count:1 size:4576"}
	{"level":"info","ts":"2022-11-07T18:43:19.833Z","caller":"traceutil/trace.go:171","msg":"trace[1148470588] range","detail":"{range_begin:/registry/minions/newest-cni-184042; range_end:; response_count:1; response_revision:317; }","duration":"101.9885ms","start":"2022-11-07T18:43:19.731Z","end":"2022-11-07T18:43:19.833Z","steps":["trace[1148470588] 'agreement among raft nodes before linearized reading'  (duration: 98.1015ms)"],"step_count":1}
	{"level":"warn","ts":"2022-11-07T18:43:26.691Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.5139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-v24jl\" ","response":"range_response_count:1 size:4516"}
	{"level":"info","ts":"2022-11-07T18:43:26.691Z","caller":"traceutil/trace.go:171","msg":"trace[661402472] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-v24jl; range_end:; response_count:1; response_revision:379; }","duration":"110.6711ms","start":"2022-11-07T18:43:26.580Z","end":"2022-11-07T18:43:26.691Z","steps":["trace[661402472] 'agreement among raft nodes before linearized reading'  (duration: 77.6481ms)","trace[661402472] 'range keys from in-memory index tree'  (duration: 32.5056ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[472503022] linearizableReadLoop","detail":"{readStateIndex:409; appliedIndex:407; }","duration":"114.9573ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[472503022] 'read index received'  (duration: 99.7149ms)","trace[472503022] 'applied index is now lower than readState.Index'  (duration: 15.2393ms)"],"step_count":2}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[375325545] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"115.182ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[375325545] 'process raft request'  (duration: 114.8138ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:43:29.345Z","caller":"traceutil/trace.go:171","msg":"trace[510430390] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"157.1741ms","start":"2022-11-07T18:43:29.188Z","end":"2022-11-07T18:43:29.345Z","steps":["trace[510430390] 'process raft request'  (duration: 141.803ms)","trace[510430390] 'compare'  (duration: 14.7726ms)"],"step_count":2}
	{"level":"warn","ts":"2022-11-07T18:43:29.346Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.7666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-11-07T18:43:29.346Z","caller":"traceutil/trace.go:171","msg":"trace[1511392306] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:393; }","duration":"115.8568ms","start":"2022-11-07T18:43:29.230Z","end":"2022-11-07T18:43:29.346Z","steps":["trace[1511392306] 'agreement among raft nodes before linearized reading'  (duration: 115.7322ms)"],"step_count":1}
	{"level":"info","ts":"2022-11-07T18:43:32.441Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T18:43:32.441Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-184042","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/11/07 18:43:32 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"warn","ts":"2022-11-07T18:43:32.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.7551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2022-11-07T18:43:32.548Z","caller":"traceutil/trace.go:171","msg":"trace[1419021998] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; }","duration":"111.9758ms","start":"2022-11-07T18:43:32.436Z","end":"2022-11-07T18:43:32.548Z","steps":["trace[1419021998] 'agreement among raft nodes before linearized reading'  (duration: 93.5661ms)"],"step_count":1}
	WARNING: 2022/11/07 18:43:32 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2022/11/07 18:43:32 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T18:43:32.643Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-11-07T18:43:32.742Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-11-07T18:43:32.744Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-11-07T18:43:32.744Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-184042","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:45:19 up  2:00,  0 users,  load average: 11.34, 10.82, 8.20
	Linux newest-cni-184042 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [190d28bb055e] <==
	* Trace[1709041045]: [1.8315127s] [1.8315127s] END
	I1107 18:44:20.965255       1 trace.go:205] Trace[767738611]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:3a3087ee-1e8a-449b-93d0-eb03badeea4f,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1815ms):
	Trace[767738611]: ---"Write to database call finished" len:162,err:<nil> 1814ms (18:44:20.965)
	Trace[767738611]: [1.8151154s] [1.8151154s] END
	I1107 18:44:20.965705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 18:44:20.966096       1 trace.go:205] Trace[329695775]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:88dc4ee4-04a5-4477-b46e-b39faee654b6,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1815ms):
	Trace[329695775]: ---"Write to database call finished" len:151,err:<nil> 1815ms (18:44:20.965)
	Trace[329695775]: [1.8158174s] [1.8158174s] END
	I1107 18:44:20.967632       1 trace.go:205] Trace[366400045]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:e3614e84-b0d0-4baa-9c1e-63c70a6f3353,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1817ms):
	Trace[366400045]: ---"Write to database call finished" len:148,err:<nil> 1816ms (18:44:20.967)
	Trace[366400045]: [1.8175062s] [1.8175062s] END
	I1107 18:44:20.971021       1 trace.go:205] Trace[1174335176]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:f699a22e-77f6-4f47-a84e-fb9c68c8894e,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (07-Nov-2022 18:44:19.150) (total time: 1820ms):
	Trace[1174335176]: ---"Write to database call finished" len:156,err:<nil> 1820ms (18:44:20.970)
	Trace[1174335176]: [1.8206758s] [1.8206758s] END
	I1107 18:44:25.467260       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 18:44:25.683034       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 18:44:26.246948       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 18:44:26.734165       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 18:44:26.840670       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 18:44:33.457152       1 controller.go:616] quota admission added evaluator for: namespaces
	I1107 18:44:34.472415       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.145.146]
	I1107 18:44:34.563939       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.104.63.82]
	I1107 18:44:38.453528       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I1107 18:44:38.635070       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 18:44:38.739137       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [fadafe8ae9b3] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546606       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546737       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 18:43:33.546989       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [36b2a34cab3d] <==
	* I1107 18:43:18.936449       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-newest-cni-184042" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1107 18:43:18.936807       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-newest-cni-184042" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1107 18:43:18.945908       1 range_allocator.go:367] Set node newest-cni-184042 PodCIDR to [192.168.0.0/24]
	I1107 18:43:18.972553       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:43:19.030156       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 18:43:19.044071       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:43:19.044710       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:43:19.130259       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1107 18:43:19.130414       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:43:19.130624       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:43:19.237908       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1107 18:43:19.530742       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:43:19.530870       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 18:43:19.539389       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:43:19.840215       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I1107 18:43:19.934300       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ghl24"
	I1107 18:43:20.045224       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-v24jl"
	I1107 18:43:20.057094       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-tss8d"
	I1107 18:43:20.351235       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1107 18:43:20.444202       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-v24jl"
	I1107 18:43:23.866530       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1107 18:43:29.480986       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I1107 18:43:29.541594       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c8fd5cf8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1107 18:43:29.560393       1 replica_set.go:550] sync "kube-system/metrics-server-5c8fd5cf8" failed with pods "metrics-server-5c8fd5cf8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I1107 18:43:29.644191       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-8zhxb"
	
	* 
	* ==> kube-controller-manager [4f8569486774] <==
	* I1107 18:44:38.351832       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W1107 18:44:38.351915       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-184042. Assuming now as a timestamp.
	I1107 18:44:38.351976       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 18:44:38.432515       1 event.go:294] "Event occurred" object="newest-cni-184042" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-184042 event: Registered Node newest-cni-184042 in Controller"
	I1107 18:44:38.432864       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 18:44:38.432900       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 18:44:38.433707       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 18:44:38.433807       1 taint_manager.go:209] "Sending events to api server"
	I1107 18:44:38.433824       1 shared_informer.go:262] Caches are synced for namespace
	I1107 18:44:38.434338       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1107 18:44:38.435691       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	E1107 18:44:38.440162       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1107 18:44:38.451674       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1107 18:44:38.532460       1 shared_informer.go:262] Caches are synced for disruption
	I1107 18:44:38.532625       1 shared_informer.go:262] Caches are synced for stateful set
	I1107 18:44:38.532838       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:44:38.535837       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1107 18:44:38.539236       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 18:44:38.549025       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I1107 18:44:38.549185       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-57bbdc5f89 to 1"
	I1107 18:44:38.839645       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:44:38.860580       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 18:44:38.860618       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 18:44:38.940077       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-2zdmh"
	I1107 18:44:39.041268       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-57bbdc5f89-6b6wx"
	
	* 
	* ==> kube-proxy [4293c85e970a] <==
	* I1107 18:43:23.929622       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:43:23.945374       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:43:23.949405       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:43:23.953354       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:43:23.957371       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1107 18:43:24.049075       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1107 18:43:24.049227       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1107 18:43:24.049274       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:43:24.235285       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:43:24.235361       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:43:24.235383       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:43:24.235412       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:43:24.235447       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:43:24.235798       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:43:24.236116       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:43:24.236135       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:43:24.243737       1 config.go:317] "Starting service config controller"
	I1107 18:43:24.243765       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:43:24.243976       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:43:24.243993       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:43:24.244473       1 config.go:444] "Starting node config controller"
	I1107 18:43:24.244490       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:43:24.345807       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:43:24.345972       1 shared_informer.go:262] Caches are synced for service config
	I1107 18:43:24.346427       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [64661acfd1b3] <==
	* I1107 18:44:25.561018       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1107 18:44:25.568385       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1107 18:44:25.633319       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1107 18:44:25.637594       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1107 18:44:25.640959       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1107 18:44:25.771511       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1107 18:44:25.771684       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1107 18:44:25.771728       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 18:44:26.037817       1 server_others.go:206] "Using iptables Proxier"
	I1107 18:44:26.037954       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 18:44:26.037977       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 18:44:26.038006       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 18:44:26.038050       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:44:26.041120       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 18:44:26.042873       1 server.go:661] "Version info" version="v1.25.3"
	I1107 18:44:26.042897       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 18:44:26.046639       1 config.go:444] "Starting node config controller"
	I1107 18:44:26.046680       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 18:44:26.046738       1 config.go:317] "Starting service config controller"
	I1107 18:44:26.046749       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 18:44:26.046786       1 config.go:226] "Starting endpoint slice config controller"
	I1107 18:44:26.047068       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 18:44:26.148112       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 18:44:26.148150       1 shared_informer.go:262] Caches are synced for node config
	I1107 18:44:26.148255       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [5fef7f25a822] <==
	* E1107 18:44:18.545294       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 18:44:18.545323       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.545180       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 18:44:18.545461       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 18:44:18.545336       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 18:44:18.545588       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 18:44:18.545762       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.546052       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.545525       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.546255       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.546394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 18:44:18.546432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 18:44:18.547017       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1107 18:44:18.547065       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W1107 18:44:18.547197       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 18:44:18.547226       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:44:18.630730       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 18:44:18.633034       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 18:44:18.632459       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1107 18:44:18.633893       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1107 18:44:18.632445       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1107 18:44:18.633946       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1107 18:44:18.632609       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 18:44:18.633984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	I1107 18:44:19.740608       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c6d275ecae46] <==
	* W1107 18:43:03.583043       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 18:43:03.583120       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 18:43:03.645261       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 18:43:03.645388       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1107 18:43:03.763449       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 18:43:03.763576       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 18:43:03.764184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.764340       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.788137       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.788278       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.859524       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 18:43:03.859684       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 18:43:03.886661       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 18:43:03.886785       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 18:43:03.936713       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 18:43:03.937019       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 18:43:03.937029       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 18:43:03.937056       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 18:43:03.999746       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 18:43:03.999870       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1107 18:43:05.852204       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 18:43:32.440091       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 18:43:32.440299       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1107 18:43:32.440394       1 run.go:74] "command failed" err="finished without leader elect"
	E1107 18:43:32.440460       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 18:43:42 UTC, end at Mon 2022-11-07 18:45:20 UTC. --
	Nov 07 18:44:39 newest-cni-184042 kubelet[1218]: I1107 18:44:39.235064    1218 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c038d127-fbe8-4e1b-9129-582d53346cf1-tmp-volume\") pod \"dashboard-metrics-scraper-7b94984548-2zdmh\" (UID: \"c038d127-fbe8-4e1b-9129-582d53346cf1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548-2zdmh"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.365802    1218 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  >
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.365990    1218 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  > pod="kube-system/metrics-server-5c8fd5cf8-8zhxb"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.366040    1218 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         rpc error: code = Unknown desc = [failed to set up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to set up pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" network for pod "metrics-server-5c8fd5cf8-8zhxb": networkPlugin cni failed to teardown pod "metrics-server-5c8fd5cf8-8zhxb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: "crio" id: "6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         Try `iptables -h' or 'iptables --help' for more information.
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:         ]
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]:  > pod="kube-system/metrics-server-5c8fd5cf8-8zhxb"
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: E1107 18:44:40.366215    1218 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c8fd5cf8-8zhxb_kube-system(2a3fbd51-1a7a-435b-85bc-88b33a7b6003)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system(2a3fbd51-1a7a-435b-85bc-88b33a7b6003)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" network for pod \\\"metrics-server-5c8fd5cf8-8zhxb\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" network for pod \\\"metrics-server-5c8fd5cf8-8zhxb\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c8fd5cf8-8zhxb_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-09926e19ffeb12bf76f4adcc -m comment --comment name: \\\"crio\\\" id: \\\"6e8a29e22675ed324f213115aeeaea64aed89b971eb77fbdcffc6de2ac8b6185\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-09926e19ffeb12bf76f4adcc':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c8fd5cf8-8zhxb" podUID=2a3fbd51-1a7a-435b-85bc-88b33a7b6003
	Nov 07 18:44:40 newest-cni-184042 kubelet[1218]: I1107 18:44:40.832023    1218 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="67d3fa8b323c091c00757fb0088de742cfa3d0279dacf0e2b285f271956d6141"
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 07 18:44:42 newest-cni-184042 kubelet[1218]: I1107 18:44:42.325626    1218 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: kubelet.service: Succeeded.
	Nov 07 18:44:42 newest-cni-184042 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [9afff48667c8] <==
	* I1107 18:44:25.343672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	* 
	* ==> storage-provisioner [e948d3d88eef] <==
	* I1107 18:43:28.531319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 18:43:28.567113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 18:43:28.567330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 18:43:28.657798       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 18:43:28.659819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714!
	I1107 18:43:28.658141       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae84d063-8899-4080-88e9-43f9f4572c37", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714 became leader
	I1107 18:43:28.764031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-184042_8a2e9372-91c3-47a2-98b8-088b4a85e714!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 18:45:18.295846    9228 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
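The stderr above ("TLS handshake timeout" from kubectl describe nodes) is expected while the apiserver is paused; the more telling failure is in the kubelet log further up, where sandbox setup is denied ("could not add IP address to \"cni0\": permission denied") and the follow-up teardown exits with status 2 because the per-sandbox NAT chain is already gone. A minimal Go sketch of that teardown guard, assuming a Linux node with iptables on PATH and root privileges (a hypothetical helper, not part of minikube or the CNI plugin; the chain name is copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// chainExists reports whether an iptables chain is present in the nat table.
	func chainExists(chain string) bool {
		// "iptables -t nat -S <chain>" exits non-zero when the chain is absent,
		// the same condition that makes the -D teardown above fail with status 2.
		return exec.Command("iptables", "-t", "nat", "-S", chain).Run() == nil
	}

	func main() {
		chain := "CNI-09926e19ffeb12bf76f4adcc" // chain name taken from the kubelet log
		if !chainExists(chain) {
			fmt.Printf("chain %s is already gone; the teardown delete can be skipped\n", chain)
		}
	}

Because the chain probe exits non-zero for a missing chain, a guard like this turns the noisy "Couldn't load target" delete into a silent no-op.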
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-184042 -n newest-cni-184042
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-184042 -n newest-cni-184042: exit status 2 (1.6870407s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-184042" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (42.48s)
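The --format flag used in the status probe above takes a Go text/template rendered against minikube's status struct, so {{.APIServer}} prints only the apiserver field; "Paused" with exit status 2 is the expected combination here rather than a second error. A sketch of the same expansion (the Status field names below are assumptions mirroring the command's output, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics the fields the "status --format" template can reference.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// A paused profile reports "Paused", which the harness maps to exit status 2.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"})
	}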
E1107 18:54:49.419075    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (353.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6070268s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5967904s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5483616s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5333904s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6393268s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.511936s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 18:53:34.649152    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5594534s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default
E1107 18:54:02.475903    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5377055s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (18.5616907s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5229234s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 18:55:40.088198    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default
E1107 18:56:12.602426    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6413665s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 18:56:19.907629    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-182329 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5384198s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (353.69s)
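Each Run/Non-zero-exit pair above is one iteration of the test's DNS probe: exec nslookup kubernetes.default inside the netcat deployment and pass only when the output contains the service ClusterIP (net_test.go:180 wants "10.96.0.1"). A sketch of that loop under assumed retry settings; net_test.go:169 is the real implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const want = "10.96.0.1" // default ClusterIP of the kubernetes Service
		for attempt := 1; attempt <= 12; attempt++ {
			out, err := exec.Command("kubectl", "--context", "false-182329",
				"exec", "deployment/netcat", "--",
				"nslookup", "kubernetes.default").CombinedOutput()
			if err == nil && strings.Contains(string(out), want) {
				fmt.Printf("resolved on attempt %d\n", attempt)
				return
			}
			time.Sleep(15 * time.Second) // back off between attempts
		}
		fmt.Println("giving up: no servers could be reached")
	}

Note that ";; connection timed out; no servers could be reached" means the resolver inside the pod never got an answer from kube-dns at all, so the failure is pod-to-service connectivity rather than name resolution.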

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (356.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.8284141s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default
E1107 18:52:21.831601    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.527673s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default
E1107 18:52:44.515207    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.516918s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5234958s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.524757s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4845241s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 18:53:55.833643    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6045068s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6176279s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.493756s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6125683s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5605008s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-182327 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5847487s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (356.21s)
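The bridge run fails with the identical symptom, so before blaming the plugin it is worth confirming CoreDNS is serving at all. A triage sketch using only standard kubectl subcommands (the context name is taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Two standard checks: are the CoreDNS pods running, and does the
		// kube-dns Service have any endpoints behind it?
		checks := [][]string{
			{"--context", "bridge-182327", "-n", "kube-system", "get", "pods", "-l", "k8s-app=kube-dns", "-o", "wide"},
			{"--context", "bridge-182327", "-n", "kube-system", "get", "endpoints", "kube-dns"},
		}
		for _, args := range checks {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			fmt.Printf("kubectl %v (err=%v)\n%s\n", args, err, out)
		}
	}

An empty ENDPOINTS column for kube-dns would explain the timeouts directly: the cluster DNS VIP (typically 10.96.0.10) would have nothing behind it.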

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (56.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5067592s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5162473s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 18:57:10.642381    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5233113s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 18:57:21.830116    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.502305s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5398472s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5285172s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4626811s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (56.95s)
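The hairpin probe asks the netcat pod to reach its own Service ("nc -z netcat 8080"), so the packet leaves the pod, is DNAT'd on the node, and must re-enter through the same bridge port it left; that round trip only works when hairpin mode is set on that port. A sketch that inspects the flag from inside the node, e.g. via minikube ssh (the bridge name cbr0 is the kubenet default and is an assumption here):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// kubenet attaches pod veths to a Linux bridge; each bridge port
		// exposes a hairpin_mode flag in sysfs. "1" means a pod may reach
		// itself through its own Service, which the nc probe above requires.
		ports, err := filepath.Glob("/sys/class/net/cbr0/brif/*/hairpin_mode")
		if err != nil || len(ports) == 0 {
			fmt.Println("no bridge ports found (wrong bridge name, or not running on the node?)")
			return
		}
		for _, p := range ports {
			v, _ := os.ReadFile(p)
			fmt.Printf("%s = %s", p, v)
		}
	}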

                                                
                                    

Test pass (244/277)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.62
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.83
10 TestDownloadOnly/v1.25.3/json-events 9.5
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.67
16 TestDownloadOnly/DeleteAll 2.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.67
18 TestDownloadOnlyKic 35.29
19 TestBinaryMirror 4.31
20 TestOffline 176.09
22 TestAddons/Setup 473.3
26 TestAddons/parallel/MetricsServer 10.41
27 TestAddons/parallel/HelmTiller 50.61
29 TestAddons/parallel/CSI 95.33
30 TestAddons/parallel/Headlamp 28.04
31 TestAddons/parallel/CloudSpanner 7.56
33 TestAddons/serial/GCPAuth 22.37
34 TestAddons/StoppedEnableDisable 15
35 TestCertOptions 115.76
36 TestCertExpiration 329.18
37 TestDockerFlags 116.23
38 TestForceSystemdFlag 113.46
39 TestForceSystemdEnv 110.69
44 TestErrorSpam/setup 82.03
45 TestErrorSpam/start 6.14
46 TestErrorSpam/status 6.83
47 TestErrorSpam/pause 5.23
48 TestErrorSpam/unpause 5.77
49 TestErrorSpam/stop 22.18
52 TestFunctional/serial/CopySyncFile 0.02
53 TestFunctional/serial/StartWithProxy 97.38
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 53.36
56 TestFunctional/serial/KubeContext 0.19
57 TestFunctional/serial/KubectlGetPods 0.34
60 TestFunctional/serial/CacheCmd/cache/add_remote 8.04
61 TestFunctional/serial/CacheCmd/cache/add_local 4.55
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.39
63 TestFunctional/serial/CacheCmd/cache/list 0.38
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.53
65 TestFunctional/serial/CacheCmd/cache/cache_reload 6.91
66 TestFunctional/serial/CacheCmd/cache/delete 0.82
67 TestFunctional/serial/MinikubeKubectlCmd 0.65
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.27
69 TestFunctional/serial/ExtraConfig 52.37
70 TestFunctional/serial/ComponentHealth 0.27
71 TestFunctional/serial/LogsCmd 3.67
72 TestFunctional/serial/LogsFileCmd 3.9
74 TestFunctional/parallel/ConfigCmd 2.58
76 TestFunctional/parallel/DryRun 4.1
77 TestFunctional/parallel/InternationalLanguage 1.68
78 TestFunctional/parallel/StatusCmd 6.14
83 TestFunctional/parallel/AddonsCmd 1.04
84 TestFunctional/parallel/PersistentVolumeClaim 122.94
86 TestFunctional/parallel/SSHCmd 3.51
87 TestFunctional/parallel/CpCmd 6.55
88 TestFunctional/parallel/MySQL 140.13
89 TestFunctional/parallel/FileSync 1.87
90 TestFunctional/parallel/CertSync 11.61
94 TestFunctional/parallel/NodeLabels 0.37
96 TestFunctional/parallel/NonActiveRuntimeDisabled 1.86
98 TestFunctional/parallel/License 2.64
99 TestFunctional/parallel/Version/short 0.39
100 TestFunctional/parallel/Version/components 2.76
101 TestFunctional/parallel/ImageCommands/ImageListShort 1.28
102 TestFunctional/parallel/ImageCommands/ImageListTable 1.11
103 TestFunctional/parallel/ImageCommands/ImageListJson 1.12
104 TestFunctional/parallel/ImageCommands/ImageListYaml 1.1
105 TestFunctional/parallel/ImageCommands/ImageBuild 8.03
106 TestFunctional/parallel/ImageCommands/Setup 9.01
107 TestFunctional/parallel/DockerEnv/powershell 9
108 TestFunctional/parallel/ProfileCmd/profile_not_create 3.47
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.1
110 TestFunctional/parallel/ProfileCmd/profile_list 2.73
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.88
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.9
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.93
114 TestFunctional/parallel/ProfileCmd/profile_json_output 2.88
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.81
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 22.56
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.21
118 TestFunctional/parallel/ImageCommands/ImageRemove 3.48
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.41
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 12.39
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.72
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.24
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
131 TestFunctional/delete_addon-resizer_images 0.02
132 TestFunctional/delete_my-image_image 0.01
133 TestFunctional/delete_minikube_cached_images 0.01
136 TestIngressAddonLegacy/StartLegacyK8sCluster 107.22
138 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 60.44
139 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.53
143 TestJSONOutput/start/Command 98.58
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 2.2
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 2.07
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 13.75
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 1.85
168 TestKicCustomNetwork/create_custom_network 89.9
169 TestKicCustomNetwork/use_default_bridge_network 87.86
170 TestKicExistingNetwork 90.87
171 TestKicCustomSubnet 91.1
172 TestMainNoArgs 0.34
173 TestMinikubeProfile 182.58
176 TestMountStart/serial/StartWithMountFirst 21.8
177 TestMountStart/serial/VerifyMountFirst 1.42
178 TestMountStart/serial/StartWithMountSecond 19.49
179 TestMountStart/serial/VerifyMountSecond 1.37
180 TestMountStart/serial/DeleteFirst 4.58
181 TestMountStart/serial/VerifyMountPostDelete 1.37
182 TestMountStart/serial/Stop 2.92
183 TestMountStart/serial/RestartStopped 14.12
184 TestMountStart/serial/VerifyMountPostStop 1.44
187 TestMultiNode/serial/FreshStart2Nodes 186.62
188 TestMultiNode/serial/DeployApp2Nodes 14.48
189 TestMultiNode/serial/PingHostFrom2Pods 4.09
190 TestMultiNode/serial/AddNode 67.11
191 TestMultiNode/serial/ProfileList 1.62
192 TestMultiNode/serial/CopyFile 50.21
193 TestMultiNode/serial/StopNode 8.18
194 TestMultiNode/serial/StartAfterStop 35.28
195 TestMultiNode/serial/RestartKeepsNodes 134.05
196 TestMultiNode/serial/DeleteNode 13.58
197 TestMultiNode/serial/StopMultiNode 26.7
198 TestMultiNode/serial/RestartMultiNode 86.91
199 TestMultiNode/serial/ValidateNameConflict 88.1
203 TestPreload 294.91
204 TestScheduledStopWindows 160.84
208 TestInsufficientStorage 55.13
209 TestRunningBinaryUpgrade 281.24
211 TestKubernetesUpgrade 315.52
212 TestMissingContainerUpgrade 276.98
214 TestStoppedBinaryUpgrade/Setup 0.85
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.52
223 TestNoKubernetes/serial/StartWithK8s 138.78
224 TestStoppedBinaryUpgrade/Upgrade 299.65
225 TestNoKubernetes/serial/StartWithStopK8s 38.04
227 TestPause/serial/Start 105.55
228 TestNoKubernetes/serial/Start 27.04
229 TestNoKubernetes/serial/VerifyK8sNotRunning 1.65
230 TestNoKubernetes/serial/ProfileList 17.4
231 TestNoKubernetes/serial/Stop 3.05
232 TestNoKubernetes/serial/StartNoArgs 14.57
233 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.48
240 TestPause/serial/SecondStartNoReconfiguration 65.66
246 TestStoppedBinaryUpgrade/MinikubeLogs 3.3
247 TestPause/serial/Pause 2.97
248 TestPause/serial/VerifyStatus 1.85
249 TestPause/serial/Unpause 2.43
252 TestStartStop/group/old-k8s-version/serial/FirstStart 159.35
254 TestStartStop/group/no-preload/serial/FirstStart 168.17
256 TestStartStop/group/embed-certs/serial/FirstStart 123.71
258 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 108.59
259 TestStartStop/group/old-k8s-version/serial/DeployApp 15.14
260 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.75
261 TestStartStop/group/old-k8s-version/serial/Stop 13.66
262 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.44
263 TestStartStop/group/old-k8s-version/serial/SecondStart 452.21
264 TestStartStop/group/embed-certs/serial/DeployApp 11.09
265 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.56
266 TestStartStop/group/embed-certs/serial/Stop 13.39
267 TestStartStop/group/no-preload/serial/DeployApp 11.1
268 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.39
269 TestStartStop/group/embed-certs/serial/SecondStart 349.94
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.79
271 TestStartStop/group/no-preload/serial/Stop 13.78
272 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.06
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.3
274 TestStartStop/group/no-preload/serial/SecondStart 354.94
275 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.05
276 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.84
277 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.41
278 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 391.84
279 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 101.06
280 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 75.05
281 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 35.06
282 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 92.06
283 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.65
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.7
285 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.67
286 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 2.19
287 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.09
288 TestStartStop/group/old-k8s-version/serial/Pause 14.34
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 2.09
290 TestStartStop/group/no-preload/serial/Pause 18.95
291 TestStartStop/group/embed-certs/serial/Pause 15.14
293 TestStartStop/group/newest-cni/serial/FirstStart 164.01
294 TestNetworkPlugins/group/auto/Start 142.57
295 TestNetworkPlugins/group/kindnet/Start 163.18
296 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.58
297 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.56
298 TestStartStop/group/default-k8s-diff-port/serial/Pause 11.95
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.47
302 TestStartStop/group/newest-cni/serial/Stop 5.74
303 TestNetworkPlugins/group/auto/KubeletFlags 1.7
304 TestNetworkPlugins/group/auto/NetCatPod 27.91
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.46
306 TestStartStop/group/newest-cni/serial/SecondStart 59.73
307 TestNetworkPlugins/group/kindnet/ControllerPod 5.06
308 TestNetworkPlugins/group/kindnet/KubeletFlags 1.67
309 TestNetworkPlugins/group/auto/DNS 0.67
310 TestNetworkPlugins/group/auto/Localhost 0.75
311 TestNetworkPlugins/group/kindnet/NetCatPod 42.11
312 TestNetworkPlugins/group/auto/HairPin 5.71
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.77
318 TestNetworkPlugins/group/kindnet/DNS 0.54
319 TestNetworkPlugins/group/kindnet/Localhost 0.55
320 TestNetworkPlugins/group/kindnet/HairPin 0.63
321 TestNetworkPlugins/group/false/Start 380.37
322 TestNetworkPlugins/group/bridge/Start 357.73
323 TestNetworkPlugins/group/false/KubeletFlags 1.7
324 TestNetworkPlugins/group/false/NetCatPod 27.01
325 TestNetworkPlugins/group/bridge/KubeletFlags 1.67
326 TestNetworkPlugins/group/bridge/NetCatPod 26.99
329 TestNetworkPlugins/group/enable-default-cni/Start 106.21
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.56
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 25.79
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.54
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.49
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.52
335 TestNetworkPlugins/group/kubenet/Start 106.21
336 TestNetworkPlugins/group/kubenet/KubeletFlags 1.5
337 TestNetworkPlugins/group/kubenet/NetCatPod 25.78
338 TestNetworkPlugins/group/kubenet/DNS 0.53
339 TestNetworkPlugins/group/kubenet/Localhost 0.5
x
+
TestDownloadOnly/v1.16.0/json-events (12.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164808 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164808 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (12.6235055s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.62s)
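The -o=json flag used above switches minikube's stdout to line-delimited JSON events. A sketch that decodes such a stream generically; the only schema assumption is one JSON object per line with a "type" field, per the CloudEvents-style envelope minikube emits:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events are long lines
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise in the stream
			}
			fmt.Printf("event: %v\n", ev["type"])
		}
	}

To use it, pipe the start command into the program, e.g. out/minikube-windows-amd64.exe start -o=json --download-only ... | go run main.go (file name hypothetical).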

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-164808
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-164808: exit status 85 (833.6031ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164808 | minikube2\jenkins | v1.28.0 | 07 Nov 22 16:48 GMT |          |
	|         | -p download-only-164808        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:48:08
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:48:08.445243    6732 out.go:296] Setting OutFile to fd 604 ...
	I1107 16:48:08.514579    6732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:48:08.515595    6732 out.go:309] Setting ErrFile to fd 608...
	I1107 16:48:08.515595    6732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 16:48:08.525609    6732 root.go:311] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1107 16:48:08.536701    6732 out.go:303] Setting JSON to true
	I1107 16:48:08.540609    6732 start.go:116] hostinfo: {"hostname":"minikube2","uptime":4326,"bootTime":1667835362,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 16:48:08.540946    6732 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 16:48:08.577433    6732 out.go:97] [download-only-164808] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 16:48:08.577433    6732 notify.go:220] Checking for updates...
	W1107 16:48:08.577970    6732 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1107 16:48:08.581697    6732 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 16:48:08.584168    6732 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 16:48:08.587421    6732 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:48:08.590477    6732 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1107 16:48:08.594769    6732 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 16:48:08.595676    6732 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:48:08.896118    6732 docker.go:137] docker version: linux-20.10.20
	I1107 16:48:08.905513    6732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:48:09.501687    6732 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-11-07 16:48:09.0618434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:48:09.504959    6732 out.go:97] Using the docker driver based on user configuration
	I1107 16:48:09.504959    6732 start.go:282] selected driver: docker
	I1107 16:48:09.505081    6732 start.go:808] validating driver "docker" against <nil>
	I1107 16:48:09.520127    6732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:48:10.141808    6732 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-11-07 16:48:09.6694716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:48:10.142731    6732 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 16:48:10.266969    6732 start_flags.go:384] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I1107 16:48:10.267625    6732 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 16:48:10.286503    6732 out.go:169] Using Docker Desktop driver with root privileges
	I1107 16:48:10.288502    6732 cni.go:95] Creating CNI manager for ""
	I1107 16:48:10.289051    6732 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 16:48:10.289174    6732 start_flags.go:317] config:
	{Name:download-only-164808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-164808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:48:10.291971    6732 out.go:97] Starting control plane node download-only-164808 in cluster download-only-164808
	I1107 16:48:10.291971    6732 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 16:48:10.294052    6732 out.go:97] Pulling base image ...
	I1107 16:48:10.294212    6732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:48:10.294212    6732 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 16:48:10.333411    6732 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 16:48:10.333411    6732 cache.go:57] Caching tarball of preloaded images
	I1107 16:48:10.334198    6732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:48:10.336747    6732 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 16:48:10.336747    6732 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:48:10.394890    6732 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 16:48:10.490467    6732 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:48:10.490467    6732 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.36@sha256_8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar
	I1107 16:48:10.490467    6732 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.36@sha256_8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar
	I1107 16:48:10.490467    6732 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 16:48:10.491468    6732 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:48:14.825767    6732 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:48:14.826684    6732 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:48:15.906062    6732 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 16:48:15.906744    6732 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-164808\config.json ...
	I1107 16:48:15.907103    6732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-164808\config.json: {Name:mkf290482eacabc6cb0a0354390017cce4214859 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:48:15.907870    6732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:48:15.909558    6732 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164808"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.83s)
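Exit status 85 is the expected outcome here: a --download-only run never creates a control plane node, so "minikube logs" has nothing to collect. A minimal repro sketch, assuming a scratch profile named "demo":

	out/minikube-windows-amd64.exe start -o=json --download-only -p demo --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
	out/minikube-windows-amd64.exe logs -p demo

The second command should exit 85 and print the same "control plane node does not exist" hint captured above.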

TestDownloadOnly/v1.25.3/json-events (9.5s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164808 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-164808 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker: (9.495898s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (9.50s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.67s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-164808
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-164808: exit status 85 (667.5883ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164808 | minikube2\jenkins | v1.28.0 | 07 Nov 22 16:48 GMT |          |
	|         | -p download-only-164808        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-164808 | minikube2\jenkins | v1.28.0 | 07 Nov 22 16:48 GMT |          |
	|         | -p download-only-164808        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:48:21
	Running on machine: minikube2
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:48:21.908640    9840 out.go:296] Setting OutFile to fd 668 ...
	I1107 16:48:21.970960    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:48:21.970960    9840 out.go:309] Setting ErrFile to fd 672...
	I1107 16:48:21.970960    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1107 16:48:21.999658    9840 root.go:311] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1107 16:48:22.007330    9840 out.go:303] Setting JSON to true
	I1107 16:48:22.009886    9840 start.go:116] hostinfo: {"hostname":"minikube2","uptime":4339,"bootTime":1667835363,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 16:48:22.009886    9840 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 16:48:22.014111    9840 out.go:97] [download-only-164808] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 16:48:22.014282    9840 notify.go:220] Checking for updates...
	I1107 16:48:22.015674    9840 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 16:48:22.018749    9840 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 16:48:22.033860    9840 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:48:22.169594    9840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1107 16:48:22.253604    9840 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 16:48:22.254677    9840 config.go:180] Loaded profile config "download-only-164808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1107 16:48:22.254989    9840 start.go:716] api.Load failed for download-only-164808: filestore "download-only-164808": Docker machine "download-only-164808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 16:48:22.255141    9840 driver.go:365] Setting default libvirt URI to qemu:///system
	W1107 16:48:22.255295    9840 start.go:716] api.Load failed for download-only-164808: filestore "download-only-164808": Docker machine "download-only-164808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 16:48:22.533231    9840 docker.go:137] docker version: linux-20.10.20
	I1107 16:48:22.541466    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:48:23.156783    9840 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-11-07 16:48:22.6996812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:48:23.361331    9840 out.go:97] Using the docker driver based on existing profile
	I1107 16:48:23.361867    9840 start.go:282] selected driver: docker
	I1107 16:48:23.361867    9840 start.go:808] validating driver "docker" against &{Name:download-only-164808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-164808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:48:23.377438    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:48:23.998964    9840 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-11-07 16:48:23.5434987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:48:24.046154    9840 cni.go:95] Creating CNI manager for ""
	I1107 16:48:24.046154    9840 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 16:48:24.046154    9840 start_flags.go:317] config:
	{Name:download-only-164808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-164808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:48:24.050910    9840 out.go:97] Starting control plane node download-only-164808 in cluster download-only-164808
	I1107 16:48:24.050910    9840 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 16:48:24.053817    9840 out.go:97] Pulling base image ...
	I1107 16:48:24.053817    9840 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 16:48:24.053817    9840 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 16:48:24.087620    9840 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 16:48:24.087620    9840 cache.go:57] Caching tarball of preloaded images
	I1107 16:48:24.088575    9840 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 16:48:24.091602    9840 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1107 16:48:24.091602    9840 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:48:24.154522    9840 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 16:48:24.245175    9840 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:48:24.245302    9840 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.36@sha256_8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar
	I1107 16:48:24.245302    9840 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.36@sha256_8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456.tar
	I1107 16:48:24.245302    9840 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 16:48:24.245302    9840 image.go:63] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1107 16:48:24.245302    9840 image.go:104] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1107 16:48:24.245918    9840 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164808"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.67s)

TestDownloadOnly/DeleteAll (2.38s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.3791164s)
--- PASS: TestDownloadOnly/DeleteAll (2.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.67s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-164808
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-164808: (1.670762s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.67s)
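Both cleanup paths are exercised above; the equivalent manual commands, assuming the same profile, are:

	out/minikube-windows-amd64.exe delete --all
	out/minikube-windows-amd64.exe delete -p download-only-164808

The point of DeleteAlwaysSucceeds is that "delete -p" still exits 0 even though "delete --all" has already removed the profile.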

TestDownloadOnlyKic (35.29s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-164837 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-164837 --force --alsologtostderr --driver=docker: (32.4231187s)
helpers_test.go:175: Cleaning up "download-docker-164837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-164837
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-164837: (1.7105766s)
--- PASS: TestDownloadOnlyKic (35.29s)
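This test checks that a --download-only start with the docker driver also caches the kicbase container image as a tarball; per the v1.16.0 log above, it lands under the .minikube\cache\kic\amd64 directory. A sketch of the same flow, assuming a scratch profile named "demo":

	out/minikube-windows-amd64.exe start --download-only -p demo --force --alsologtostderr --driver=docker
	dir C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64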

TestBinaryMirror (4.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-164912 --alsologtostderr --binary-mirror http://127.0.0.1:56950 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-164912 --alsologtostderr --binary-mirror http://127.0.0.1:56950 --driver=docker: (2.4748717s)
helpers_test.go:175: Cleaning up "binary-mirror-164912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-164912
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-164912: (1.5994999s)
--- PASS: TestBinaryMirror (4.31s)
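--binary-mirror redirects the kubectl/kubelet/kubeadm binary downloads to an alternate server; here the test harness serves one on an ephemeral local port. A sketch, assuming a mirror is already listening on 127.0.0.1:56950 and a scratch profile:

	out/minikube-windows-amd64.exe start --download-only -p demo --binary-mirror http://127.0.0.1:56950 --driver=docker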

TestOffline (176.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-181846 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-181846 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m42.7813128s)
helpers_test.go:175: Cleaning up "offline-docker-181846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-181846

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-181846: (13.304839s)
--- PASS: TestOffline (176.09s)
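The offline test is a plain start/delete cycle that (presumably, given the earlier download-only runs) can be served entirely from the local image and preload caches:

	out/minikube-windows-amd64.exe start -p offline-docker-181846 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
	out/minikube-windows-amd64.exe delete -p offline-docker-181846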

TestAddons/Setup (473.3s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-164917 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-164917 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m53.2968271s)
--- PASS: TestAddons/Setup (473.30s)
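All of the addon subtests below run against this single profile. Addons can be enabled in bulk at start time, as above, or toggled afterwards; a minimal sketch with a hypothetical "demo" profile:

	out/minikube-windows-amd64.exe start -p demo --memory=4000 --wait=true --driver=docker --addons=registry --addons=metrics-server
	out/minikube-windows-amd64.exe addons enable ingress -p demo
	out/minikube-windows-amd64.exe addons disable ingress -p demo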

TestAddons/parallel/MetricsServer (10.41s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 59.6195ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-b44hj" [be836f0c-8d0f-4814-8cb1-6b04b3bdfda4] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1979809s
addons_test.go:368: (dbg) Run:  kubectl --context addons-164917 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:385: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 addons disable metrics-server --alsologtostderr -v=1: (4.7087166s)
--- PASS: TestAddons/parallel/MetricsServer (10.41s)
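The functional assertion here is simply that "kubectl top" returns pod metrics once the metrics-server pod is Running, after which the addon is disabled again:

	kubectl --context addons-164917 top pods -n kube-system
	out/minikube-windows-amd64.exe -p addons-164917 addons disable metrics-server --alsologtostderr -v=1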

TestAddons/parallel/HelmTiller (50.61s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 43.7998ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-9fn2r" [df21c55e-641b-4c9e-a7cd-968c485854f2] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.1186948s
addons_test.go:426: (dbg) Run:  kubectl --context addons-164917 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-164917 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (43.0138447s)

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 addons disable helm-tiller --alsologtostderr -v=1

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 addons disable helm-tiller --alsologtostderr -v=1: (2.4245075s)
--- PASS: TestAddons/parallel/HelmTiller (50.61s)
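The 43s spent in the version probe is likely dominated by the alpine/helm image pull; the probe itself is a one-shot pod that queries the in-cluster tiller:

	kubectl --context addons-164917 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version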

TestAddons/parallel/CSI (95.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 38.997ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:517: (dbg) Done: kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pvc.yaml: (3.3080596s)
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164917 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164917 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:527: (dbg) Run:  kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [73643ed4-d067-4af4-afdf-652735fb0c7a] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [73643ed4-d067-4af4-afdf-652735fb0c7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [73643ed4-d067-4af4-afdf-652735fb0c7a] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 50.0659576s
addons_test.go:537: (dbg) Run:  kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164917 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164917 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-164917 delete pod task-pv-pod
addons_test.go:547: (dbg) Done: kubectl --context addons-164917 delete pod task-pv-pod: (3.6991908s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-164917 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164917 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164917 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:569: (dbg) Done: kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml: (1.2105196s)

=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [52c5af92-9016-48ca-9a5c-bf959a2fcf2a] Pending
helpers_test.go:342: "task-pv-pod-restore" [52c5af92-9016-48ca-9a5c-bf959a2fcf2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [52c5af92-9016-48ca-9a5c-bf959a2fcf2a] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.1081495s
addons_test.go:579: (dbg) Run:  kubectl --context addons-164917 delete pod task-pv-pod-restore
addons_test.go:579: (dbg) Done: kubectl --context addons-164917 delete pod task-pv-pod-restore: (1.5274586s)
addons_test.go:583: (dbg) Run:  kubectl --context addons-164917 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-164917 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 addons disable csi-hostpath-driver --alsologtostderr -v=1: (10.4159871s)
addons_test.go:595: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 addons disable volumesnapshots --alsologtostderr -v=1: (2.3613232s)
--- PASS: TestAddons/parallel/CSI (95.33s)
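Condensed, the CSI scenario above is a provision, attach, snapshot, restore round trip over the bundled manifests, with the pod/PVC deletions interleaved before the restore exactly as logged:

	kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pvc.yaml
	kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pv-pod.yaml
	kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\snapshot.yaml
	kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
	kubectl --context addons-164917 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml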

TestAddons/parallel/Headlamp (28.04s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-164917 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-164917 --alsologtostderr -v=1: (4.8880204s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-mr6sb" [fbbb90be-23e9-4f11-8405-62de14d91829] Pending

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-mr6sb" [fbbb90be-23e9-4f11-8405-62de14d91829] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-mr6sb" [fbbb90be-23e9-4f11-8405-62de14d91829] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.1516597s
--- PASS: TestAddons/parallel/Headlamp (28.04s)

TestAddons/parallel/CloudSpanner (7.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-nj9sb" [28056468-569a-4e5a-8047-774298e41d97] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1783493s
addons_test.go:762: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-164917

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:762: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-164917: (2.3386472s)
--- PASS: TestAddons/parallel/CloudSpanner (7.56s)

TestAddons/serial/GCPAuth (22.37s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-164917 create -f testdata\busybox.yaml
addons_test.go:606: (dbg) Done: kubectl --context addons-164917 create -f testdata\busybox.yaml: (1.328952s)
addons_test.go:613: (dbg) Run:  kubectl --context addons-164917 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [defcbd9b-fcdd-4e71-aa5c-aa15814d7e0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [defcbd9b-fcdd-4e71-aa5c-aa15814d7e0b] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0326496s
addons_test.go:625: (dbg) Run:  kubectl --context addons-164917 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-164917 describe sa gcp-auth-test
addons_test.go:651: (dbg) Run:  kubectl --context addons-164917 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:675: (dbg) Run:  kubectl --context addons-164917 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 addons disable gcp-auth --alsologtostderr -v=1: (8.9031093s)
--- PASS: TestAddons/serial/GCPAuth (22.37s)
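The gcp-auth addon injects credentials into new pods via a mutating webhook; the assertion boils down to the env var and mounted file being visible inside the test pod:

	kubectl --context addons-164917 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-164917 exec busybox -- /bin/sh -c "cat /google-app-creds.json"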

TestAddons/StoppedEnableDisable (15s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-164917
addons_test.go:135: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-164917: (13.7735362s)
addons_test.go:139: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-164917
addons_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-164917
--- PASS: TestAddons/StoppedEnableDisable (15.00s)

TestCertOptions (115.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-182644 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1107 18:27:10.621707    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-182644 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m46.0785536s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-182644 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-182644 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.5372066s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-182644 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-182644 -- "sudo cat /etc/kubernetes/admin.conf": (1.5947246s)
helpers_test.go:175: Cleaning up "cert-options-182644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-182644
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-182644: (6.3105616s)
--- PASS: TestCertOptions (115.76s)
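
Note: --apiserver-ips and --apiserver-names add extra subject alternative names to the generated apiserver certificate, and --apiserver-port moves the advertised port. A sketch of verifying the SANs by hand (profile name hypothetical; the grep runs inside the Linux node):

    out/minikube-windows-amd64.exe start -p cert-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
    out/minikube-windows-amd64.exe -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"

The added IP and DNS name should show up in the X509v3 Subject Alternative Name extension.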

TestCertExpiration (329.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-182403 --memory=2048 --cert-expiration=3m --driver=docker
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-182403 --memory=2048 --cert-expiration=3m --driver=docker: (1m44.6762945s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-182403 --memory=2048 --cert-expiration=8760h --driver=docker
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-182403 --memory=2048 --cert-expiration=8760h --driver=docker: (38.410428s)
helpers_test.go:175: Cleaning up "cert-expiration-182403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-182403
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-182403: (6.0886601s)
--- PASS: TestCertExpiration (329.18s)
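
Note: --cert-expiration sets the validity window of the generated cluster certificates, and restarting the same profile with a new value reissues them; the second start above succeeds precisely because the lapsed 3-minute certificates get regenerated. The shape of the sequence (profile name hypothetical):

    out/minikube-windows-amd64.exe start -p cert-demo --cert-expiration=3m --driver=docker
    (wait for the certificates to expire, then:)
    out/minikube-windows-amd64.exe start -p cert-demo --cert-expiration=8760h --driver=docker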

TestDockerFlags (116.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-182447 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-182447 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m40.469716s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-182447 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-182447 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.5548581s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-182447 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-182447 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.5744182s)
helpers_test.go:175: Cleaning up "docker-flags-182447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-182447
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-182447: (12.633528s)
--- PASS: TestDockerFlags (116.23s)
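
Note: --docker-env entries end up in the Docker daemon's systemd unit (the Environment property) and --docker-opt entries are appended to the dockerd command line (the ExecStart property), which is exactly what the two systemctl show probes read back. A trimmed sketch (profile name hypothetical):

    out/minikube-windows-amd64.exe start -p docker-flags-demo --docker-env=FOO=BAR --docker-opt=debug --driver=docker
    out/minikube-windows-amd64.exe -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-windows-amd64.exe -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"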

TestForceSystemdFlag (113.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-182254 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-182254 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m41.3032201s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-182254 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdFlag
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-182254 ssh "docker info --format {{.CgroupDriver}}": (1.8167382s)
helpers_test.go:175: Cleaning up "force-systemd-flag-182254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-182254
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-182254: (10.3414289s)
--- PASS: TestForceSystemdFlag (113.46s)
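
Note: --force-systemd switches the container runtime's cgroup driver from the default cgroupfs to systemd, so the docker info probe above is expected to print "systemd". The same check with a hypothetical profile:

    out/minikube-windows-amd64.exe start -p systemd-demo --force-systemd --driver=docker
    out/minikube-windows-amd64.exe -p systemd-demo ssh "docker info --format {{.CgroupDriver}}"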

TestForceSystemdEnv (110.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-182331 --memory=2048 --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-182331 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m32.7237332s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-182331 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-182331 ssh "docker info --format {{.CgroupDriver}}": (1.821123s)
helpers_test.go:175: Cleaning up "force-systemd-env-182331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-182331
=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-182331: (16.1432129s)
--- PASS: TestForceSystemdEnv (110.69s)
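
Note: this variant exercises the environment-variable form of the same setting; a sketch assuming the MINIKUBE_FORCE_SYSTEMD variable (minikube's env-var equivalent of --force-systemd) is what the harness sets here:

    set MINIKUBE_FORCE_SYSTEMD=true
    out/minikube-windows-amd64.exe start -p systemd-env-demo --driver=docker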

TestErrorSpam/setup (82.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-165930 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-165930 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 --driver=docker: (1m22.0302192s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3."
--- PASS: TestErrorSpam/setup (82.03s)

TestErrorSpam/start (6.14s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run: (2.0377504s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run: (2.0413874s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 start --dry-run: (2.0592362s)
--- PASS: TestErrorSpam/start (6.14s)

TestErrorSpam/status (6.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status: (2.680272s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status: (2.1912984s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 status: (1.9527649s)
--- PASS: TestErrorSpam/status (6.83s)

TestErrorSpam/pause (5.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause: (2.1647101s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause: (1.5176946s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 pause: (1.5436809s)
--- PASS: TestErrorSpam/pause (5.23s)

TestErrorSpam/unpause (5.77s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause: (1.9717145s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause: (2.2235636s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 unpause: (1.5709677s)
--- PASS: TestErrorSpam/unpause (5.77s)

TestErrorSpam/stop (22.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop: (13.3279318s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop: (4.4183645s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-165930 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-165930 stop: (4.4308838s)
--- PASS: TestErrorSpam/stop (22.18s)

TestFunctional/serial/CopySyncFile (0.02s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\9948\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.02s)
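
Note: files placed under the .minikube\files directory are copied into the node at the corresponding absolute path on start, so the local sync path above appears inside the node as /etc/test/nested/copy/9948/hosts. A quick way to confirm:

    out/minikube-windows-amd64.exe -p functional-170143 ssh "cat /etc/test/nested/copy/9948/hosts"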

TestFunctional/serial/StartWithProxy (97.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E1107 17:02:10.566817    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.582981    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.609963    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.640423    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.687757    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.781496    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:10.953194    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:11.287805    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:11.943282    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:13.228930    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:15.789922    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:20.924788    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:31.179837    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:02:51.669671    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
functional_test.go:2161: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-170143 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m37.3700959s)
--- PASS: TestFunctional/serial/StartWithProxy (97.38s)
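
Note: minikube honors the standard HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables; this test starts the cluster with a proxy configured in the harness environment. A sketch of the equivalent manual run (proxy address hypothetical):

    set HTTP_PROXY=http://127.0.0.1:3128
    out/minikube-windows-amd64.exe start -p functional-170143 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker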

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (53.36s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --alsologtostderr -v=8
E1107 17:03:32.637806    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
functional_test.go:652: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-170143 --alsologtostderr -v=8: (53.3620494s)
functional_test.go:656: soft start took 53.3630833s for "functional-170143" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.36s)

TestFunctional/serial/KubeContext (0.19s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.19s)

TestFunctional/serial/KubectlGetPods (0.34s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-170143 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.34s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:3.1: (2.7485284s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:3.3: (2.5896537s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:latest: (2.7033873s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.04s)
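
Note: minikube cache add pulls an image on the host and preloads it into the node's container runtime; cache list and cache delete manage the cached set. The round trip this group of tests exercises:

    out/minikube-windows-amd64.exe -p functional-170143 cache add k8s.gcr.io/pause:3.1
    out/minikube-windows-amd64.exe cache list
    out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1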

TestFunctional/serial/CacheCmd/cache/add_local (4.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-170143 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3415603046\001
functional_test.go:1070: (dbg) Done: docker build -t minikube-local-cache-test:functional-170143 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3415603046\001: (1.654901s)
functional_test.go:1082: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache add minikube-local-cache-test:functional-170143
functional_test.go:1082: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cache add minikube-local-cache-test:functional-170143: (2.2650747s)
functional_test.go:1087: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache delete minikube-local-cache-test:functional-170143
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-170143
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.55s)
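
Note: the same mechanism works for images that exist only in the host's local Docker daemon; build first, then cache by tag (build context path hypothetical):

    docker build -t minikube-local-cache-test:functional-170143 .
    out/minikube-windows-amd64.exe -p functional-170143 cache add minikube-local-cache-test:functional-170143
    out/minikube-windows-amd64.exe -p functional-170143 cache delete minikube-local-cache-test:functional-170143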

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.39s)

TestFunctional/serial/CacheCmd/cache/list (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.38s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl images
functional_test.go:1117: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl images: (1.531156s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.53s)

TestFunctional/serial/CacheCmd/cache/cache_reload (6.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1140: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh sudo docker rmi k8s.gcr.io/pause:latest: (1.4409656s)
functional_test.go:1146: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (1.5166625s)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cache reload: (2.4430728s)
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1156: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (1.5109346s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (6.91s)
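
Note: cache reload re-pushes every cached image into the node, which is how the image deleted inside the node above comes back; the second crictl inspecti succeeds where the first one exited 1:

    out/minikube-windows-amd64.exe -p functional-170143 ssh sudo docker rmi k8s.gcr.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-170143 cache reload
    out/minikube-windows-amd64.exe -p functional-170143 ssh sudo crictl inspecti k8s.gcr.io/pause:latest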

TestFunctional/serial/CacheCmd/cache/delete (0.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.82s)

TestFunctional/serial/MinikubeKubectlCmd (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 kubectl -- --context functional-170143 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.65s)
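
Note: minikube kubectl runs a kubectl matching the cluster's Kubernetes version (downloading it on first use); everything after the -- is passed to kubectl unchanged:

    out/minikube-windows-amd64.exe -p functional-170143 kubectl -- --context functional-170143 get pods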

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.27s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out\kubectl.exe --context functional-170143 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.27s)

TestFunctional/serial/ExtraConfig (52.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 17:04:54.565189    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
functional_test.go:750: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-170143 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.3698354s)
functional_test.go:754: restart took 52.3699973s for "functional-170143" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.37s)
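
Note: --extra-config takes component.key=value pairs that are forwarded to the named component's kubeadm configuration (apiserver, kubelet, scheduler, and so on), so this restart re-provisions the apiserver with an extra admission plugin enabled:

    out/minikube-windows-amd64.exe start -p functional-170143 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all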

TestFunctional/serial/ComponentHealth (0.27s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-170143 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.27s)

TestFunctional/serial/LogsCmd (3.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 logs
functional_test.go:1229: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 logs: (3.6712235s)
--- PASS: TestFunctional/serial/LogsCmd (3.67s)

TestFunctional/serial/LogsFileCmd (3.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2622645546\001\logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2622645546\001\logs.txt: (3.8928206s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.90s)
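
Note: minikube logs writes to stdout by default; --file redirects the same bundle to a path, which is the form to use when attaching logs to a bug report (output path hypothetical):

    out/minikube-windows-amd64.exe -p functional-170143 logs --file C:\tmp\minikube.log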

TestFunctional/parallel/ConfigCmd (2.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 config get cpus: exit status 14 (425.2359ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 config get cpus: exit status 14 (438.985ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.58s)
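
Note: minikube config persists per-user defaults, and config get on an unset key deliberately exits with status 14, which is what both Non-zero exit checks above assert. The full set/get/unset cycle:

    out/minikube-windows-amd64.exe -p functional-170143 config set cpus 2
    out/minikube-windows-amd64.exe -p functional-170143 config get cpus
    out/minikube-windows-amd64.exe -p functional-170143 config unset cpus
    out/minikube-windows-amd64.exe -p functional-170143 config get cpus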

TestFunctional/parallel/DryRun (4.1s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --memory 250MB --alsologtostderr --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.8393105s)
-- stdout --
	* [functional-170143] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1107 17:06:08.722756    9912 out.go:296] Setting OutFile to fd 900 ...
	I1107 17:06:08.791294    9912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:08.791294    9912 out.go:309] Setting ErrFile to fd 860...
	I1107 17:06:08.791294    9912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:08.812627    9912 out.go:303] Setting JSON to false
	I1107 17:06:08.815628    9912 start.go:116] hostinfo: {"hostname":"minikube2","uptime":5406,"bootTime":1667835362,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 17:06:08.815784    9912 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 17:06:08.823299    9912 out.go:177] * [functional-170143] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 17:06:08.827297    9912 notify.go:220] Checking for updates...
	I1107 17:06:08.830926    9912 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 17:06:08.835760    9912 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 17:06:08.842479    9912 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:06:08.847233    9912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:06:08.850585    9912 config.go:180] Loaded profile config "functional-170143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:06:08.851389    9912 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:06:09.197072    9912 docker.go:137] docker version: linux-20.10.20
	I1107 17:06:09.206570    9912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:06:09.862071    9912 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-11-07 17:06:09.3628304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:06:10.135306    9912 out.go:177] * Using the docker driver based on existing profile
	I1107 17:06:10.140268    9912 start.go:282] selected driver: docker
	I1107 17:06:10.140268    9912 start.go:808] validating driver "docker" against &{Name:functional-170143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-170143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:06:10.140385    9912 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:06:10.224490    9912 out.go:177] 
	W1107 17:06:10.227917    9912 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 17:06:10.229472    9912 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --alsologtostderr -v=1 --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:984: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --alsologtostderr -v=1 --driver=docker: (2.2570733s)
--- PASS: TestFunctional/parallel/DryRun (4.10s)
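
Note: --dry-run walks the full validation path without creating or changing anything; here the 250MB request trips the 1800MB minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run without a memory override validates the existing profile cleanly:

    out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --memory 250MB --driver=docker
    out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --driver=docker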

TestFunctional/parallel/InternationalLanguage (1.68s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-170143 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.6755963s)
-- stdout --
	* [functional-170143] minikube v1.28.0 sur Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1107 17:06:12.828045    7932 out.go:296] Setting OutFile to fd 964 ...
	I1107 17:06:12.932233    7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:12.932233    7932 out.go:309] Setting ErrFile to fd 968...
	I1107 17:06:12.932233    7932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:06:12.955241    7932 out.go:303] Setting JSON to false
	I1107 17:06:12.958239    7932 start.go:116] hostinfo: {"hostname":"minikube2","uptime":5410,"bootTime":1667835362,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1107 17:06:12.958239    7932 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 17:06:12.962253    7932 out.go:177] * [functional-170143] minikube v1.28.0 sur Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1107 17:06:12.966244    7932 notify.go:220] Checking for updates...
	I1107 17:06:12.968243    7932 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1107 17:06:12.970258    7932 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1107 17:06:12.973275    7932 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:06:12.976233    7932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:06:12.979239    7932 config.go:180] Loaded profile config "functional-170143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:06:12.980244    7932 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:06:13.331754    7932 docker.go:137] docker version: linux-20.10.20
	I1107 17:06:13.345743    7932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:06:14.069739    7932 info.go:266] docker info: {ID:DO37:JWMT:5LGQ:222W:4QGF:FMMT:WI5L:GWNV:WV5S:J3K2:RD54:HMVK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-11-07 17:06:13.5155383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:06:14.075741    7932 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 17:06:14.077752    7932 start.go:282] selected driver: docker
	I1107 17:06:14.077752    7932 start.go:808] validating driver "docker" against &{Name:functional-170143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-170143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:06:14.077752    7932 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:06:14.157382    7932 out.go:177] 
	W1107 17:06:14.160415    7932 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 17:06:14.165378    7932 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.68s)

                                                
                                    
TestFunctional/parallel/StatusCmd (6.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 status: (2.2031763s)
functional_test.go:853: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:853: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.9616313s)
functional_test.go:865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 status -o json
functional_test.go:865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 status -o json: (1.9707102s)
--- PASS: TestFunctional/parallel/StatusCmd (6.14s)
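The status checks above exercise three output modes: the default table, a custom Go template (-f host:{{.Host}},...), and JSON (-o json). A minimal sketch of consuming the JSON form, assuming the JSON keys match the field names the template references (Host, Kubelet, APIServer, Kubeconfig):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Status mirrors the fields referenced by the template above
// ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}});
// the exact JSON schema is an assumption based on those names.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// `minikube status` exits non-zero when a component is down, so
	// tolerate the error and still try to decode stdout.
	out, err := exec.Command("minikube", "-p", "functional-170143",
		"status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status exited non-zero:", err)
	}
	var st Status
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}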

                                                
                                    
TestFunctional/parallel/AddonsCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.04s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (122.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [8f7c661a-f83e-4502-803d-f50bbbf6ed39] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0231804s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-170143 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-170143 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-170143 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-170143 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [121c7dc7-8244-410e-a584-a3e68b338d43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [121c7dc7-8244-410e-a584-a3e68b338d43] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m43.1035416s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-170143 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-170143 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-170143 delete -f testdata/storage-provisioner/pod.yaml: (1.4896744s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-170143 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f05cebe7-e0b0-4e41-b10d-2b5757c91d06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [f05cebe7-e0b0-4e41-b10d-2b5757c91d06] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.1008712s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-170143 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (122.94s)
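The pass above hinges on data surviving a pod restart: the test touches /tmp/mount/foo on the PVC-backed mount, deletes the pod, recreates it from the same manifest, and lists the mount again. A sketch of the same sequence via kubectl, reusing the names from the log (it omits the readiness wait the test performs between apply and exec):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-170143 context;
// names and paths below are taken from the log above.
func run(args ...string) string {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-170143"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write a marker onto the claim
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits here for the new pod to become Ready.
	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}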

                                                
                                    
TestFunctional/parallel/SSHCmd (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "echo hello"
functional_test.go:1655: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "echo hello": (1.6980922s)
functional_test.go:1672: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "cat /etc/hostname"
functional_test.go:1672: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "cat /etc/hostname": (1.8159727s)
--- PASS: TestFunctional/parallel/SSHCmd (3.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (6.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.4344914s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh -n functional-170143 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh -n functional-170143 "sudo cat /home/docker/cp-test.txt": (1.5714574s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 cp functional-170143:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd815381149\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 cp functional-170143:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd815381149\001\cp-test.txt: (1.8922958s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh -n functional-170143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh -n functional-170143 "sudo cat /home/docker/cp-test.txt": (1.644646s)
--- PASS: TestFunctional/parallel/CpCmd (6.55s)

                                                
                                    
TestFunctional/parallel/MySQL (140.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-170143 replace --force -f testdata\mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Done: kubectl --context functional-170143 replace --force -f testdata\mysql.yaml: (1.4903497s)
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-f99rm" [10ee2ecc-f435-4f1f-aceb-5be4e5882cd8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-f99rm" [10ee2ecc-f435-4f1f-aceb-5be4e5882cd8] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m36.1855913s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (631.4921ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 17:07:38.415515    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (642.4874ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (1.020408s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (1.3159428s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (1.1932216s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (831.7336ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;": exit status 1 (486.8621ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-170143 exec mysql-596b7fcdbf-f99rm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (140.13s)
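The repeated non-zero exits above are the expected MySQL startup sequence, not flakiness: ERROR 2002 while mysqld's socket is not yet listening, then ERROR 1045 while the entrypoint is still applying the root password, then success. The test simply retries until the query works; a minimal sketch of that loop, with the pod name taken from the log and an arbitrary fixed backoff (the real test uses its own retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 1; i <= 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-170143",
			"exec", "mysql-596b7fcdbf-f99rm", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Transient while MySQL boots: ERROR 2002 (no socket yet),
		// then ERROR 1045 (root password not applied yet).
		fmt.Printf("attempt %d: %v\n", i, err)
		time.Sleep(10 * time.Second)
	}
	panic("mysql never became queryable")
}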

                                                
                                    
TestFunctional/parallel/FileSync (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/9948/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/test/nested/copy/9948/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/test/nested/copy/9948/hosts": (1.8660801s)
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.87s)

                                                
                                    
TestFunctional/parallel/CertSync (11.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/9948.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/9948.pem"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/9948.pem": (1.7946795s)
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/9948.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /usr/share/ca-certificates/9948.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /usr/share/ca-certificates/9948.pem": (1.9173942s)
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.9925754s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/99482.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/99482.pem"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/99482.pem": (1.9980601s)
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/99482.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /usr/share/ca-certificates/99482.pem"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /usr/share/ca-certificates/99482.pem": (2.004221s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.9064617s)
--- PASS: TestFunctional/parallel/CertSync (11.61s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-170143 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.37s)
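The label check drives kubectl with a go-template that takes the first item of the node list and ranges over its label map. The same range construct runs under Go's text/template; a sketch with a hypothetical label map standing in for (index .items 0).metadata.labels (here the map itself is the template's dot, so no index step is needed):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same {{range $k, $v := ...}}{{$k}} {{end}} construct as the log;
	// text/template visits map keys in sorted order.
	const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	labels := map[string]string{ // hypothetical node labels
		"kubernetes.io/hostname": "functional-170143",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}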

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 ssh "sudo systemctl is-active crio": exit status 1 (1.8581326s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.86s)

                                                
                                    
TestFunctional/parallel/License (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-windows-amd64.exe license

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Done: out/minikube-windows-amd64.exe license: (2.6263901s)
--- PASS: TestFunctional/parallel/License (2.64s)

                                                
                                    
TestFunctional/parallel/Version/short (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 version --short
--- PASS: TestFunctional/parallel/Version/short (0.39s)

                                                
                                    
TestFunctional/parallel/Version/components (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 version -o=json --components: (2.7600451s)
--- PASS: TestFunctional/parallel/Version/components (2.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls --format short: (1.2752848s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-170143 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-170143
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-170143
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls --format table
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls --format table: (1.1087398s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-170143 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-170143 | bf781691a6ddd | 30B    |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| docker.io/library/mysql                     | 5.7               | eef0fab001e8d | 495MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| gcr.io/google-containers/addon-resizer      | functional-170143 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls --format json: (1.1201767s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-170143 image ls --format json:
[{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da"
,"repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"bf781691a6dddc1f484801091de9f85a8d18a90db78166306863f19583dcf67d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-170143"],"size":"30"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-170143"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8
s.gcr.io/pause:3.1"],"size":"742000"},{"id":"eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.12s)
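The JSON dump above is a flat array of image records. A sketch of decoding it, with the struct shaped after the fields visible in that output (note that size is a quoted string, not a number):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the records in the dump above: id, repoDigests,
// repoTags, and size (a JSON string).
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-170143",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}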

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls --format yaml: (1.0971632s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-170143 image ls --format yaml:
- id: eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-170143
size: "32900000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: bf781691a6dddc1f484801091de9f85a8d18a90db78166306863f19583dcf67d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-170143
size: "30"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-170143 ssh pgrep buildkitd: exit status 1 (1.4889769s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image build -t localhost/my-image:functional-170143 testdata\build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image build -t localhost/my-image:functional-170143 testdata\build: (5.5240815s)
functional_test.go:316: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-170143 image build -t localhost/my-image:functional-170143 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b48f34b1c799
Removing intermediate container b48f34b1c799
---> 8da29111e7ea
Step 3/3 : ADD content.txt /
---> 949140841acd
Successfully built 949140841acd
Successfully tagged localhost/my-image:functional-170143
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.0120063s)
E1107 17:12:10.577173    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.03s)
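The initial non-zero `ssh pgrep buildkitd` above is a probe, not a failure: pgrep exits 1 when no process matches, which tells the test the build will go through the Docker daemon (hence the classic Step 1/3 output that follows). Telling "no match" apart from a real error means inspecting the exit code; a sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// pgrep convention: exit 0 = found, exit 1 = none found,
	// anything else = a real error. Probe over minikube ssh as above.
	err := exec.Command("minikube", "-p", "functional-170143",
		"ssh", "pgrep buildkitd").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("buildkitd is running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("buildkitd not running; building via the Docker daemon")
	default:
		panic(err)
	}
}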

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (9.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (8.698865s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-170143
--- PASS: TestFunctional/parallel/ImageCommands/Setup (9.01s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (9.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-170143 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-170143"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-170143 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-170143": (5.0840924s)
functional_test.go:515: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-170143 docker-env | Invoke-Expression ; docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:515: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-170143 docker-env | Invoke-Expression ; docker images": (3.9088333s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (9.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (3.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.6576416s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (3.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143: (17.6823384s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.4166914s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.3198219s)
functional_test.go:1311: Took "2.3198219s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "405.6328ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.73s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.88s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.90s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.90s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.93s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (2.4044871s)
functional_test.go:1362: Took "2.4046785s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "477.3849ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143: (5.5144834s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.2960347s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (9.5357556s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-170143
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image load --daemon gcr.io/google-containers/addon-resizer:functional-170143: (11.1600205s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.6197358s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (22.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image save gcr.io/google-containers/addon-resizer:functional-170143 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image save gcr.io/google-containers/addon-resizer:functional-170143 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (9.2092644s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image rm gcr.io/google-containers/addon-resizer:functional-170143
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image rm gcr.io/google-containers/addon-resizer:functional-170143: (1.8792435s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.6028716s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (6.8910585s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image ls: (1.5209376s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.41s)
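
Together with ImageSaveToFile and ImageRemove above, this completes a file-based round trip: save an in-cluster image to a tarball, remove it, then restore it from the tarball. A sketch of the same sequence (tar path shortened for readability):

    # export the image from the cluster to a local tarball
    out/minikube-windows-amd64.exe -p functional-170143 image save gcr.io/google-containers/addon-resizer:functional-170143 addon-resizer-save.tar
    # remove it from the cluster, then restore it from the tarball
    out/minikube-windows-amd64.exe -p functional-170143 image rm gcr.io/google-containers/addon-resizer:functional-170143
    out/minikube-windows-amd64.exe -p functional-170143 image load addon-resizer-save.tar
    out/minikube-windows-amd64.exe -p functional-170143 image ls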

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-170143
functional_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-170143 image save --daemon gcr.io/google-containers/addon-resizer:functional-170143
E1107 17:07:10.583471    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
functional_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p functional-170143 image save --daemon gcr.io/google-containers/addon-resizer:functional-170143: (11.8753349s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-170143
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.39s)
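
This is the inverse direction of ImageTagAndLoadDaemon: image save --daemon copies an image from the cluster back into the host's Docker daemon. A sketch of the check the test performs:

    # drop the host-side copy, pull it back out of the cluster, then confirm it exists
    docker rmi gcr.io/google-containers/addon-resizer:functional-170143
    out/minikube-windows-amd64.exe -p functional-170143 image save --daemon gcr.io/google-containers/addon-resizer:functional-170143
    docker image inspect gcr.io/google-containers/addon-resizer:functional-170143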

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-170143 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.72s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-170143 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [fbeae54f-433a-4ae5-a55e-f8bd1d679533] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [fbeae54f-433a-4ae5-a55e-f8bd1d679533] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.1402247s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.72s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-170143 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)
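
The tunnel workflow these serial tests drive: minikube tunnel runs in the background and assigns an ingress IP to LoadBalancer services, which kubectl can then read back. A condensed sketch:

    # keep the tunnel running in a separate terminal
    out/minikube-windows-amd64.exe -p functional-170143 tunnel --alsologtostderr
    # create a LoadBalancer service, then read the ingress IP the tunnel assigned
    kubectl --context functional-170143 apply -f testdata\testsvc.yaml
    kubectl --context functional-170143 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}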

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-170143 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 6164: TerminateProcess: Access is denied.
helpers_test.go:506: unable to kill pid 8332: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/delete_addon-resizer_images (0.02s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-170143
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-170143: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:functional-170143" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-170143": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-170143
functional_test.go:194: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-170143: context deadline exceeded (0s)
functional_test.go:196: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-170143": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-170143
functional_test.go:202: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-170143: context deadline exceeded (0s)
functional_test.go:204: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-170143": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (107.22s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-174200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1107 17:42:10.598212    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-174200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (1m47.2153247s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (107.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (60.44s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 addons enable ingress --alsologtostderr -v=5: (1m0.4373931s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (60.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 addons enable ingress-dns --alsologtostderr -v=5: (1.5346435s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.53s)

TestJSONOutput/start/Command (98.58s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-174544 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E1107 17:45:45.235863    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:45:50.365985    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:46:00.608784    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:46:21.096311    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:47:02.386810    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:47:10.594436    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-174544 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m38.5786449s)
--- PASS: TestJSONOutput/start/Command (98.58s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (2.2s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-174544 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-174544 --output=json --user=testUser: (2.2044811s)
--- PASS: TestJSONOutput/pause/Command (2.20s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (2.07s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-174544 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-174544 --output=json --user=testUser: (2.072296s)
--- PASS: TestJSONOutput/unpause/Command (2.07s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.75s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-174544 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-174544 --output=json --user=testUser: (13.7508548s)
--- PASS: TestJSONOutput/stop/Command (13.75s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.85s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-174746 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-174746 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (381.0033ms)
-- stdout --
	{"specversion":"1.0","id":"cb2c9cad-ab06-49c5-bcda-8e4b70389b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-174746] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8eab787f-3114-4a06-a283-042115be920f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"2c2424dc-b8c1-4f3d-a90b-1594fbbfe578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"46d6e219-3d11-4e11-9a68-fec1c9b94ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"0c1663d0-3b08-4957-a1a4-4addae6af7ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bd33b94-729f-4c2b-bee6-642183dc023c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-174746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-174746
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-174746: (1.4660418s)
--- PASS: TestErrorJSONOutput (1.85s)
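
With --output=json, each stdout line above is a self-contained CloudEvents-style JSON object whose type field distinguishes steps (io.k8s.sigs.minikube.step), informational messages (io.k8s.sigs.minikube.info), and errors (io.k8s.sigs.minikube.error), so failures can be extracted mechanically. A sketch in a POSIX shell, assuming jq is installed (binary name generalized):

    # print only the error message from the event stream
    minikube start -p json-output-error-174746 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected output for this run: The driver 'fail' is not supported on windows/amd64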

TestKicCustomNetwork/create_custom_network (89.9s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-174748 --network=
E1107 17:48:24.313445    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-174748 --network=: (1m23.4223537s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-174748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-174748
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-174748: (6.2608728s)
--- PASS: TestKicCustomNetwork/create_custom_network (89.90s)

TestKicCustomNetwork/use_default_bridge_network (87.86s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-174917 --network=bridge
E1107 17:49:49.376742    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.392617    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.408449    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.439535    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.487285    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.580629    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:49.750024    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:50.078610    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:50.723883    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:52.015514    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:54.581607    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:49:59.704854    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:50:09.945856    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:50:30.431667    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:50:40.037824    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-174917 --network=bridge: (1m22.2384095s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-174917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-174917
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-174917: (5.4004095s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (87.86s)

TestKicExistingNetwork (90.87s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-175046 --network=existing-network
E1107 17:51:08.165386    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 17:51:11.397237    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:51:53.821382    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:52:10.593571    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-175046 --network=existing-network: (1m24.0877301s)
helpers_test.go:175: Cleaning up "existing-network-175046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-175046
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-175046: (5.4802883s)
--- PASS: TestKicExistingNetwork (90.87s)
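
Unlike the --network= runs above, this test points minikube at a Docker network that already exists rather than letting it create one. A hand-run sketch; creating the network up front with docker network create is an assumption about how to reproduce the precondition, since the test provisions it internally:

    docker network create existing-network
    out/minikube-windows-amd64.exe start -p existing-network-175046 --network=existing-network
    # cleanup: the profile first, then the network it was attached to
    out/minikube-windows-amd64.exe delete -p existing-network-175046
    docker network rm existing-network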

TestKicCustomSubnet (91.1s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-175216 --subnet=192.168.60.0/24
E1107 17:52:33.332777    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-175216 --subnet=192.168.60.0/24: (1m24.6751031s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-175216 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-175216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-175216
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-175216: (6.2233633s)
--- PASS: TestKicCustomSubnet (91.10s)
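
The --subnet flag pins the Docker network minikube creates to a specific CIDR, which the test verifies straight from Docker's IPAM config. A sketch of the same check:

    out/minikube-windows-amd64.exe start -p custom-subnet-175216 --subnet=192.168.60.0/24
    # should print 192.168.60.0/24
    docker network inspect custom-subnet-175216 --format "{{(index .IPAM.Config 0).Subnet}}"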

TestMainNoArgs (0.34s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.34s)

TestMinikubeProfile (182.58s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-175348 --driver=docker
E1107 17:54:49.372078    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-175348 --driver=docker: (1m21.2261916s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-175348 --driver=docker
E1107 17:55:17.175231    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 17:55:40.036618    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-175348 --driver=docker: (1m21.403624s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-175348
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.5761167s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-175348
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (3.2686393s)
helpers_test.go:175: Cleaning up "second-175348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-175348
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-175348: (6.9560546s)
helpers_test.go:175: Cleaning up "first-175348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-175348
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-175348: (5.9241208s)
--- PASS: TestMinikubeProfile (182.58s)
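
The profile machinery under test here: two independent clusters coexist, and "profile <name>" switches which one subsequent commands target. A condensed sketch of the sequence:

    out/minikube-windows-amd64.exe start -p first-175348 --driver=docker
    out/minikube-windows-amd64.exe start -p second-175348 --driver=docker
    # select the active profile, then confirm via the JSON listing
    out/minikube-windows-amd64.exe profile first-175348
    out/minikube-windows-amd64.exe profile list -ojson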

TestMountStart/serial/StartWithMountFirst (21.8s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-175650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E1107 17:57:10.610131    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-175650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (20.8040602s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.80s)
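
The long start line bundles the mount under test: --mount plus explicit gid/uid, msize, and port options, with --no-kubernetes keeping the node minimal. The follow-up tests verify it by listing the mount point from inside the node:

    out/minikube-windows-amd64.exe start -p mount-start-1-175650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
    # the mounted host directory should be visible at /minikube-host
    out/minikube-windows-amd64.exe -p mount-start-1-175650 ssh -- ls /minikube-host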

TestMountStart/serial/VerifyMountFirst (1.42s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-175650 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-175650 ssh -- ls /minikube-host: (1.4226304s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.42s)

TestMountStart/serial/StartWithMountSecond (19.49s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-175650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-175650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.4790812s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.49s)

TestMountStart/serial/VerifyMountSecond (1.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host: (1.3731391s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.37s)

TestMountStart/serial/DeleteFirst (4.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-175650 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-175650 --alsologtostderr -v=5: (4.5768106s)
--- PASS: TestMountStart/serial/DeleteFirst (4.58s)

TestMountStart/serial/VerifyMountPostDelete (1.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host: (1.3658077s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.37s)

TestMountStart/serial/Stop (2.92s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-175650
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-175650: (2.9188935s)
--- PASS: TestMountStart/serial/Stop (2.92s)

TestMountStart/serial/RestartStopped (14.12s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-175650
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-175650: (13.1080915s)
--- PASS: TestMountStart/serial/RestartStopped (14.12s)

TestMountStart/serial/VerifyMountPostStop (1.44s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-175650 ssh -- ls /minikube-host: (1.4407266s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.44s)

TestMultiNode/serial/FreshStart2Nodes (186.62s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E1107 17:59:49.385745    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 18:00:40.039886    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m4.144898s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: (2.4714837s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (186.62s)
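
A two-node cluster comes up from a single start invocation; status then reports each node separately. A sketch (a third node is added later by the AddNode test below):

    out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
    out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
    # grow the cluster afterwards:
    out/minikube-windows-amd64.exe node add -p multinode-175805 -v 3 --alsologtostderr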

TestMultiNode/serial/DeployApp2Nodes (14.48s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.00176s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- rollout status deployment/busybox: (4.6547624s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- nslookup kubernetes.io: (2.2537597s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- nslookup kubernetes.io: (1.8599703s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (14.48s)

TestMultiNode/serial/PingHostFrom2Pods (4.09s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-cqzcn -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (4.09s)
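
The host-reachability check works by resolving the special name host.minikube.internal inside each pod and pinging the address it returns (192.168.65.2 on this host); the awk/cut pipeline just plucks the resolved IP out of nslookup's output. A sketch against one of the run-specific pod names from this test:

    out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-windows-amd64.exe kubectl -p multinode-175805 -- exec busybox-65db55d5d6-788tg -- sh -c "ping -c 1 192.168.65.2"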

TestMultiNode/serial/AddNode (67.11s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-175805 -v 3 --alsologtostderr
E1107 18:02:03.538368    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:02:10.612148    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-175805 -v 3 --alsologtostderr: (1m3.6865824s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: (3.4254084s)
--- PASS: TestMultiNode/serial/AddNode (67.11s)

TestMultiNode/serial/ProfileList (1.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6174155s)
--- PASS: TestMultiNode/serial/ProfileList (1.62s)

TestMultiNode/serial/CopyFile (50.21s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status --output json --alsologtostderr: (3.3717053s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805:/home/docker/cp-test.txt: (1.4779928s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt": (1.4061556s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805.txt: (1.3453975s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt": (1.4648484s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt multinode-175805-m02:/home/docker/cp-test_multinode-175805_multinode-175805-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt multinode-175805-m02:/home/docker/cp-test_multinode-175805_multinode-175805-m02.txt: (2.0016152s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt": (1.3859771s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test_multinode-175805_multinode-175805-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test_multinode-175805_multinode-175805-m02.txt": (1.4968623s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt multinode-175805-m03:/home/docker/cp-test_multinode-175805_multinode-175805-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805:/home/docker/cp-test.txt multinode-175805-m03:/home/docker/cp-test_multinode-175805_multinode-175805-m03.txt: (2.0798638s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test.txt": (1.447607s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test_multinode-175805_multinode-175805-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test_multinode-175805_multinode-175805-m03.txt": (1.3438166s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805-m02:/home/docker/cp-test.txt: (1.4838601s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt": (1.447837s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805-m02.txt: (1.4150738s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt": (1.4049679s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt multinode-175805:/home/docker/cp-test_multinode-175805-m02_multinode-175805.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt multinode-175805:/home/docker/cp-test_multinode-175805-m02_multinode-175805.txt: (2.0460167s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt": (1.4498274s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test_multinode-175805-m02_multinode-175805.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test_multinode-175805-m02_multinode-175805.txt": (1.5760003s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt multinode-175805-m03:/home/docker/cp-test_multinode-175805-m02_multinode-175805-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m02:/home/docker/cp-test.txt multinode-175805-m03:/home/docker/cp-test_multinode-175805-m02_multinode-175805-m03.txt: (2.1050076s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test.txt": (1.4353268s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test_multinode-175805-m02_multinode-175805-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test_multinode-175805-m02_multinode-175805-m03.txt": (1.453794s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp testdata\cp-test.txt multinode-175805-m03:/home/docker/cp-test.txt: (1.4825929s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt": (1.4609923s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1576551418\001\cp-test_multinode-175805-m03.txt: (1.4292591s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt": (1.4296089s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt multinode-175805:/home/docker/cp-test_multinode-175805-m03_multinode-175805.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt multinode-175805:/home/docker/cp-test_multinode-175805-m03_multinode-175805.txt: (2.1031567s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt": (1.469931s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test_multinode-175805-m03_multinode-175805.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805 "sudo cat /home/docker/cp-test_multinode-175805-m03_multinode-175805.txt": (1.4942714s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt multinode-175805-m02:/home/docker/cp-test_multinode-175805-m03_multinode-175805-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 cp multinode-175805-m03:/home/docker/cp-test.txt multinode-175805-m02:/home/docker/cp-test_multinode-175805-m03_multinode-175805-m02.txt: (1.9280376s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m03 "sudo cat /home/docker/cp-test.txt": (1.3628037s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test_multinode-175805-m03_multinode-175805-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 ssh -n multinode-175805-m02 "sudo cat /home/docker/cp-test_multinode-175805-m03_multinode-175805-m02.txt": (1.3887536s)
--- PASS: TestMultiNode/serial/CopyFile (50.21s)
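The copy matrix above exercises every node pair: each `minikube cp` is immediately verified by `ssh -n <node> "sudo cat ..."` read-backs on both ends. A minimal Go sketch of one copy-and-verify step, assuming a minikube binary on PATH and reusing the profile and node names from the log:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given arguments and returns its stdout.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).Output()
	return string(out), err
}

func main() {
	profile := "multinode-175805"
	src, dst := profile+"-m02", profile+"-m03"
	srcPath := "/home/docker/cp-test.txt"
	dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)

	// Copy between nodes, as the `cp` invocations above do.
	if _, err := run("-p", profile, "cp", src+":"+srcPath, dst+":"+dstPath); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	// Read both files back over SSH and compare, mirroring the sudo-cat checks.
	a, _ := run("-p", profile, "ssh", "-n", src, "sudo cat "+srcPath)
	b, _ := run("-p", profile, "ssh", "-n", dst, "sudo cat "+dstPath)
	fmt.Println("contents match:", a == b)
}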

TestMultiNode/serial/StopNode (8.18s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 node stop m03: (2.7643675s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-175805 status: exit status 7 (2.6973416s)

-- stdout --
	multinode-175805
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175805-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175805-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: exit status 7 (2.7149965s)

-- stdout --
	multinode-175805
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175805-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175805-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 18:03:35.217136    9768 out.go:296] Setting OutFile to fd 900 ...
	I1107 18:03:35.284726    9768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:03:35.284726    9768 out.go:309] Setting ErrFile to fd 944...
	I1107 18:03:35.284726    9768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:03:35.296232    9768 out.go:303] Setting JSON to false
	I1107 18:03:35.296384    9768 mustload.go:65] Loading cluster: multinode-175805
	I1107 18:03:35.296492    9768 notify.go:220] Checking for updates...
	I1107 18:03:35.297182    9768 config.go:180] Loaded profile config "multinode-175805": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:03:35.297182    9768 status.go:255] checking status of multinode-175805 ...
	I1107 18:03:35.312819    9768 cli_runner.go:164] Run: docker container inspect multinode-175805 --format={{.State.Status}}
	I1107 18:03:35.524676    9768 status.go:330] multinode-175805 host status = "Running" (err=<nil>)
	I1107 18:03:35.524751    9768 host.go:66] Checking if "multinode-175805" exists ...
	I1107 18:03:35.533523    9768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175805
	I1107 18:03:35.788718    9768 host.go:66] Checking if "multinode-175805" exists ...
	I1107 18:03:35.799662    9768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:03:35.806609    9768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175805
	I1107 18:03:36.021744    9768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58856 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-175805\id_rsa Username:docker}
	I1107 18:03:36.161704    9768 ssh_runner.go:195] Run: systemctl --version
	I1107 18:03:36.187733    9768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:03:36.233932    9768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-175805
	I1107 18:03:36.439293    9768 kubeconfig.go:92] found "multinode-175805" server: "https://127.0.0.1:58860"
	I1107 18:03:36.439431    9768 api_server.go:165] Checking apiserver status ...
	I1107 18:03:36.452522    9768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 18:03:36.498852    9768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1774/cgroup
	I1107 18:03:36.525968    9768 api_server.go:181] apiserver freezer: "20:freezer:/docker/1aeddc47b9244499ca0ace93528c4819708ed84ec2eeb03ef138c0563979783f/kubepods/burstable/pod08fcedf98ac5bc169cb87e6dce1adaa2/8e4feb32df1a2c0da860e484fe9721a18be229ae594b80d176f4b4050f3cbc48"
	I1107 18:03:36.539379    9768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1aeddc47b9244499ca0ace93528c4819708ed84ec2eeb03ef138c0563979783f/kubepods/burstable/pod08fcedf98ac5bc169cb87e6dce1adaa2/8e4feb32df1a2c0da860e484fe9721a18be229ae594b80d176f4b4050f3cbc48/freezer.state
	I1107 18:03:36.565620    9768 api_server.go:203] freezer state: "THAWED"
	I1107 18:03:36.565620    9768 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58860/healthz ...
	I1107 18:03:36.591061    9768 api_server.go:278] https://127.0.0.1:58860/healthz returned 200:
	ok
	I1107 18:03:36.591106    9768 status.go:421] multinode-175805 apiserver status = Running (err=<nil>)
	I1107 18:03:36.591106    9768 status.go:257] multinode-175805 status: &{Name:multinode-175805 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 18:03:36.591106    9768 status.go:255] checking status of multinode-175805-m02 ...
	I1107 18:03:36.606176    9768 cli_runner.go:164] Run: docker container inspect multinode-175805-m02 --format={{.State.Status}}
	I1107 18:03:36.809240    9768 status.go:330] multinode-175805-m02 host status = "Running" (err=<nil>)
	I1107 18:03:36.809240    9768 host.go:66] Checking if "multinode-175805-m02" exists ...
	I1107 18:03:36.817832    9768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175805-m02
	I1107 18:03:37.024099    9768 host.go:66] Checking if "multinode-175805-m02" exists ...
	I1107 18:03:37.035025    9768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 18:03:37.043568    9768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175805-m02
	I1107 18:03:37.258380    9768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58927 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-175805-m02\id_rsa Username:docker}
	I1107 18:03:37.396672    9768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 18:03:37.425869    9768 status.go:257] multinode-175805-m02 status: &{Name:multinode-175805-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 18:03:37.425869    9768 status.go:255] checking status of multinode-175805-m03 ...
	I1107 18:03:37.440256    9768 cli_runner.go:164] Run: docker container inspect multinode-175805-m03 --format={{.State.Status}}
	I1107 18:03:37.650973    9768 status.go:330] multinode-175805-m03 host status = "Stopped" (err=<nil>)
	I1107 18:03:37.650973    9768 status.go:343] host is not running, skipping remaining checks
	I1107 18:03:37.650973    9768 status.go:257] multinode-175805-m03 status: &{Name:multinode-175805-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (8.18s)
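The --alsologtostderr trace above doubles as documentation for how `minikube status` decides what to print: inspect the container state first (a stopped host short-circuits the remaining checks, which is why the command exits 7), then ask systemctl about kubelet over SSH, then probe the apiserver's /healthz on the forwarded port. A rough standalone sketch of that probe order; `docker exec` stands in for the SSH session the real code opens, and the port is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	name := "multinode-175805" // profile/container name from the log

	// 1. Host state via `docker container inspect`, as cli_runner.go logs above.
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil || strings.TrimSpace(string(out)) != "running" {
		fmt.Println("host: Stopped")
		return // host down: skip remaining checks, as status.go:343 notes
	}
	fmt.Println("host: Running")

	// 2. kubelet via systemctl inside the node (exit 0 means active).
	if exec.Command("docker", "exec", name, "sudo", "systemctl",
		"is-active", "--quiet", "service", "kubelet").Run() == nil {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Stopped")
	}

	// 3. apiserver via /healthz; TLS verification is skipped only because
	// this sketch hits a self-signed endpoint.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://127.0.0.1:58860/healthz")
	if err == nil {
		defer resp.Body.Close()
	}
	if err == nil && resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Stopped")
	}
}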

TestMultiNode/serial/StartAfterStop (35.28s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 node start m03 --alsologtostderr: (31.2112363s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status: (3.5415434s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.28s)

TestMultiNode/serial/RestartKeepsNodes (134.05s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-175805
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-175805
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-175805: (27.5481889s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true -v=8 --alsologtostderr
E1107 18:04:49.376291    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 18:05:40.042798    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:06:12.553251    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true -v=8 --alsologtostderr: (1m45.7804351s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-175805
--- PASS: TestMultiNode/serial/RestartKeepsNodes (134.05s)

TestMultiNode/serial/DeleteNode (13.58s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 node delete m03: (9.7861382s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: (3.0099639s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (13.58s)
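The go-template handed to kubectl above prints one Ready status per remaining node and is easy to misread through two layers of shell quoting. Here is the identical template run standalone against stand-in node data (fabricated for illustration, shaped like the JSON kubectl feeds it, hence the lowercase map keys):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Verbatim template from the test invocation above.
	const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "DiskPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	tmpl := template.Must(template.New("ready").Parse(src))
	// Prints " True" plus a newline: one line per node's Ready condition.
	tmpl.Execute(os.Stdout, nodes)
}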

TestMultiNode/serial/StopMultiNode (26.7s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 stop: (25.1092359s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-175805 status: exit status 7 (791.5974ms)

-- stdout --
	multinode-175805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175805-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: exit status 7 (799.0026ms)

-- stdout --
	multinode-175805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175805-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 18:07:06.763968    6356 out.go:296] Setting OutFile to fd 728 ...
	I1107 18:07:06.826935    6356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:07:06.826935    6356 out.go:309] Setting ErrFile to fd 912...
	I1107 18:07:06.826935    6356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 18:07:06.837934    6356 out.go:303] Setting JSON to false
	I1107 18:07:06.838081    6356 mustload.go:65] Loading cluster: multinode-175805
	I1107 18:07:06.838264    6356 notify.go:220] Checking for updates...
	I1107 18:07:06.839161    6356 config.go:180] Loaded profile config "multinode-175805": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 18:07:06.839228    6356 status.go:255] checking status of multinode-175805 ...
	I1107 18:07:06.855330    6356 cli_runner.go:164] Run: docker container inspect multinode-175805 --format={{.State.Status}}
	I1107 18:07:07.064604    6356 status.go:330] multinode-175805 host status = "Stopped" (err=<nil>)
	I1107 18:07:07.064604    6356 status.go:343] host is not running, skipping remaining checks
	I1107 18:07:07.064604    6356 status.go:257] multinode-175805 status: &{Name:multinode-175805 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 18:07:07.064681    6356 status.go:255] checking status of multinode-175805-m02 ...
	I1107 18:07:07.079044    6356 cli_runner.go:164] Run: docker container inspect multinode-175805-m02 --format={{.State.Status}}
	I1107 18:07:07.268704    6356 status.go:330] multinode-175805-m02 host status = "Stopped" (err=<nil>)
	I1107 18:07:07.268704    6356 status.go:343] host is not running, skipping remaining checks
	I1107 18:07:07.268704    6356 status.go:257] multinode-175805-m02 status: &{Name:multinode-175805-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.70s)

TestMultiNode/serial/RestartMultiNode (86.91s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true -v=8 --alsologtostderr --driver=docker
E1107 18:07:10.615481    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-175805 --wait=true -v=8 --alsologtostderr --driver=docker: (1m23.1651695s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-175805 status --alsologtostderr: (2.8567952s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
E1107 18:08:33.840636    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.91s)

TestMultiNode/serial/ValidateNameConflict (88.1s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-175805
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-175805-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-175805-m02 --driver=docker: exit status 14 (428.3389ms)

-- stdout --
	* [multinode-175805-m02] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-175805-m02' is duplicated with machine name 'multinode-175805-m02' in profile 'multinode-175805'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-175805-m03 --driver=docker
E1107 18:09:49.383894    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-175805-m03 --driver=docker: (1m18.8104237s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-175805
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-175805: exit status 80 (2.1405937s)

-- stdout --
	* Adding node m03 to cluster multinode-175805
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-175805-m03 already exists in multinode-175805-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_36.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-175805-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-175805-m03: (6.3361551s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (88.10s)
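The exit-14 failure above follows from a simple rule: a new profile name must not collide with any machine name of an existing profile, and a multi-node profile owns <name>, <name>-m02, <name>-m03, and so on. A rough sketch of the duplicate check, with the naming scheme inferred from the log rather than lifted from minikube's code:

package main

import "fmt"

// machineNames lists the node container names a multi-node profile owns.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func main() {
	existing := machineNames("multinode-175805", 3) // ..., -m02, -m03
	requested := "multinode-175805-m02"
	for _, m := range existing {
		if m == requested {
			// minikube exits 14 (MK_USAGE) on this branch.
			fmt.Printf("profile name %q duplicates machine %q\n", requested, m)
			return
		}
	}
	fmt.Println("profile name is unique")
}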

TestPreload (294.91s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-181015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E1107 18:10:40.057708    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:12:10.615583    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-181015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m19.3636301s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-181015 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-181015 -- docker pull gcr.io/k8s-minikube/busybox: (3.003506s)
preload_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-181015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6
E1107 18:14:49.387346    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
preload_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-181015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6: (2m24.4658479s)
preload_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-181015 -- docker images
preload_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-181015 -- docker images: (1.5779824s)
helpers_test.go:175: Cleaning up "test-preload-181015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-181015
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-181015: (6.4942121s)
--- PASS: TestPreload (294.91s)
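The closing `docker images` run carries TestPreload's real assertion: the busybox image pulled into the v1.24.4 cluster must still be present after the restart onto v1.24.6. A sketch of that final check, assuming a local docker CLI and Unix-style line endings:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List repository names only, one per line.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	const want = "gcr.io/k8s-minikube/busybox" // image pulled by the test above
	for _, repo := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if repo == want {
			fmt.Println("preloaded image survived the upgrade:", want)
			return
		}
	}
	fmt.Println("image missing:", want)
}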

TestScheduledStopWindows (160.84s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-181510 --memory=2048 --driver=docker
E1107 18:15:40.048260    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-181510 --memory=2048 --driver=docker: (1m27.0083602s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-181510 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-181510 --schedule 5m: (1.7966095s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-181510 -n scheduled-stop-181510
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-181510 -n scheduled-stop-181510: (1.6983607s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-181510 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-181510 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.5033954s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-181510 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-181510 --schedule 5s: (2.7899946s)
E1107 18:17:10.611525    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-181510
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-181510: exit status 7 (615.3402ms)

-- stdout --
	scheduled-stop-181510
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-181510 -n scheduled-stop-181510
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-181510 -n scheduled-stop-181510: exit status 7 (553.5306ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-181510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-181510
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-181510: (4.8531046s)
--- PASS: TestScheduledStopWindows (160.84s)
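`stop --schedule` arms the minikube-scheduled-stop systemd unit inside the guest (the unit queried via `systemctl show` above) and exposes the countdown through `status --format={{.TimeToStop}}`. The toy below illustrates only that timer contract, not minikube's implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 5 * time.Second // stand-in for `--schedule 5s`
	done := make(chan struct{})
	fmt.Println("stop scheduled in", delay)
	time.AfterFunc(delay, func() {
		// The real unit runs minikube's stop machinery in the guest;
		// this callback only marks the deadline.
		fmt.Println("stopping profile now")
		close(done)
	})
	<-done
}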

TestInsufficientStorage (55.13s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-181751 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-181751 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (47.2233643s)

-- stdout --
	{"specversion":"1.0","id":"b0eb53b5-e5d2-471d-8e8b-1c7b3ddbe03a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-181751] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6022956a-fcf8-4601-9912-f6252233e2f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"fc4d74dc-55dd-452c-823d-3a523e7c34d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"3bcadffb-6ba4-49a4-8e86-355380d7eab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"2cc88d19-c63f-4cb3-84a2-595e03dbab53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27876f95-7b35-4525-a807-49e86b6798de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"915c8503-accf-46b8-ab1e-ec5c831b10a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0459cb4a-d270-47d6-b168-37dec22418cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f150662-8614-49d1-a256-7754c60e90ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"957d88da-3dda-45df-b9dc-87045c649eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-181751 in cluster insufficient-storage-181751","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69eec61b-04d9-49b9-9036-9193482358d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"72a790e5-2212-40f0-8463-799388295094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f95b91e3-4b4f-4c4f-8fb1-3a19d06a341f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-181751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-181751 --output=json --layout=cluster: exit status 7 (1.3868711s)

-- stdout --
	{"Name":"insufficient-storage-181751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-181751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 18:18:40.153786    4548 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-181751" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-181751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-181751 --output=json --layout=cluster: exit status 7 (1.4233101s)

-- stdout --
	{"Name":"insufficient-storage-181751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-181751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 18:18:41.579028    4912 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-181751" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E1107 18:18:41.624751    4912 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-181751\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-181751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-181751
E1107 18:18:43.553028    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-181751: (5.0901219s)
--- PASS: TestInsufficientStorage (55.13s)
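With --output=json, progress arrives as the CloudEvents shown above, which makes the out-of-space failure machine-readable: type io.k8s.sigs.minikube.error, name RSRC_DOCKER_STORAGE, exitcode 26. A small Go scanner for such a stream; the two sample lines are trimmed copies of the log's events:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

const stream = `{"type":"io.k8s.sigs.minikube.step","data":{"name":"Creating Container"}}
{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

func main() {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed: %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}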

TestRunningBinaryUpgrade (281.24s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2132421080.exe start -p running-upgrade-181846 --memory=2200 --vm-driver=docker
E1107 18:19:49.390628    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 18:20:40.061060    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2132421080.exe start -p running-upgrade-181846 --memory=2200 --vm-driver=docker: (3m12.7526348s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-181846 --memory=2200 --alsologtostderr -v=1 --driver=docker
E1107 18:22:10.623098    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-181846 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m14.337232s)
helpers_test.go:175: Cleaning up "running-upgrade-181846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-181846
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-181846: (13.2464292s)
--- PASS: TestRunningBinaryUpgrade (281.24s)

TestKubernetesUpgrade (315.52s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
E1107 18:25:40.055497    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (1m40.801588s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-182538
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-182538: (10.3175674s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-182538 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-182538 status --format={{.Host}}: exit status 7 (665.4456ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (1m23.123523s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-182538 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (962.9692ms)

-- stdout --
	* [kubernetes-upgrade-182538] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-182538
	    minikube start -p kubernetes-upgrade-182538 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1825382 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-182538 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-182538 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (1m44.3719044s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-182538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-182538
E1107 18:30:40.062931    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-182538: (15.0180558s)
--- PASS: TestKubernetesUpgrade (315.52s)
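The exit-106 refusal is a one-way valve: in-place upgrades are fine, but moving the existing v1.25.3 cluster back to v1.16.0 is rejected as K8S_DOWNGRADE_UNSUPPORTED, with delete-and-recreate offered instead. A simplified sketch of such a guard; minikube's real comparison uses a semver library, whereas this hand-rolled parse assumes well-formed v<major>.<minor>.<patch> strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.25.3" into comparable integers (no error handling: sketch).
func parse(v string) (major, minor, patch int) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	major, _ = strconv.Atoi(parts[0])
	minor, _ = strconv.Atoi(parts[1])
	patch, _ = strconv.Atoi(parts[2])
	return
}

// older reports whether version a predates version b.
func older(a, b string) bool {
	am, an, ap := parse(a)
	bm, bn, bp := parse(b)
	if am != bm {
		return am < bm
	}
	if an != bn {
		return an < bn
	}
	return ap < bp
}

func main() {
	current, requested := "v1.25.3", "v1.16.0"
	if older(requested, current) {
		// minikube exits 106 here and prints the recreate suggestions above.
		fmt.Printf("unable to safely downgrade existing %s cluster to %s\n", current, requested)
		return
	}
	fmt.Println("upgrade (or same version) is allowed")
}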

TestMissingContainerUpgrade (276.98s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.256648523.exe start -p missing-upgrade-182522 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.256648523.exe start -p missing-upgrade-182522 --memory=2200 --driver=docker: (2m35.9683523s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-182522
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-182522: (21.8797603s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-182522
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-182522 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-182522 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m24.73502s)
helpers_test.go:175: Cleaning up "missing-upgrade-182522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-182522
E1107 18:29:49.398621    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-182522: (13.5070254s)
--- PASS: TestMissingContainerUpgrade (276.98s)

TestStoppedBinaryUpgrade/Setup (0.85s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (518.5802ms)

-- stdout --
	* [NoKubernetes-181846] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)
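This 0.52s case never starts a cluster; it exercises flag validation only, which rejects --kubernetes-version combined with --no-kubernetes and exits 14 (MK_USAGE). A toy reproduction with the standard flag package (minikube's actual CLI wiring differs):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
	fmt.Println("flags ok")
}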

TestNoKubernetes/serial/StartWithK8s (138.78s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --driver=docker: (2m16.8007364s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-181846 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-181846 status -o json: (1.9779392s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (138.78s)

TestStoppedBinaryUpgrade/Upgrade (299.65s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2787999273.exe start -p stopped-upgrade-181846 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2787999273.exe start -p stopped-upgrade-181846 --memory=2200 --vm-driver=docker: (3m50.458772s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2787999273.exe -p stopped-upgrade-181846 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2787999273.exe -p stopped-upgrade-181846 stop: (15.9495288s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-181846 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-181846 --memory=2200 --alsologtostderr -v=1 --driver=docker: (53.2361882s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (299.65s)

TestNoKubernetes/serial/StartWithStopK8s (38.04s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --driver=docker: (25.1040525s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-181846 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-181846 status -o json: exit status 2 (1.7792916s)

-- stdout --
	{"Name":"NoKubernetes-181846","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-181846

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-181846: (11.153348s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.04s)
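The exit status 2 here is the expected shape of this step: the host container runs while kubelet is stopped, and `status -o json` still prints the full document shown in stdout. Decoding that payload is straightforward; the struct below mirrors the printed field names, and the raw string is copied from the log:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := `{"Name":"NoKubernetes-181846","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s\n", st.Name, st.Host, st.Kubelet)
}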

TestPause/serial/Start (105.55s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-182142 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-182142 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m45.5544769s)
--- PASS: TestPause/serial/Start (105.55s)

TestNoKubernetes/serial/Start (27.04s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --driver=docker
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --no-kubernetes --driver=docker: (27.0445103s)
--- PASS: TestNoKubernetes/serial/Start (27.04s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.65s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-181846 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-181846 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6480917s)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.65s)
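
Note: the non-zero exit above is the passing case. `systemctl is-active --quiet` exits 0 only when the unit is active, and systemd uses exit status 3 for an inactive unit; that is the "ssh: Process exited with status 3" stderr line, confirming kubelet is not running on a --no-kubernetes profile. A minimal sketch of the same assertion, assuming a minikube binary on PATH and this run's profile name:

    // Sketch: assert kubelet is inactive inside the minikube node. systemd's
    // `is-active` exits 3 for an inactive unit; minikube ssh then surfaces a
    // non-zero exit, which is what the test treats as success.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-181846",
            "sudo systemctl is-active --quiet service kubelet")
        if err := cmd.Run(); err == nil {
            fmt.Println("FAIL: kubelet is active")
        } else if exitErr, ok := err.(*exec.ExitError); ok {
            fmt.Println("ok: kubelet not active; minikube ssh exit code:", exitErr.ExitCode())
        } else {
            fmt.Println("could not run minikube ssh:", err)
        }
    }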

TestNoKubernetes/serial/ProfileList (17.4s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.1125607s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (5.2871503s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.40s)

TestNoKubernetes/serial/Stop (3.05s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-181846
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-181846: (3.0495282s)
--- PASS: TestNoKubernetes/serial/Stop (3.05s)

TestNoKubernetes/serial/StartNoArgs (14.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --driver=docker
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-181846 --driver=docker: (14.573241s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (14.57s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.48s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-181846 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-181846 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.484779s)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.48s)

TestPause/serial/SecondStartNoReconfiguration (65.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-182142 --alsologtostderr -v=1 --driver=docker
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-182142 --alsologtostderr -v=1 --driver=docker: (1m5.6240952s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (65.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-181846
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-181846: (3.3027203s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.30s)

TestPause/serial/Pause (2.97s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-182142 --alsologtostderr -v=5
=== CONT  TestPause/serial/Pause
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-182142 --alsologtostderr -v=5: (2.9652379s)
--- PASS: TestPause/serial/Pause (2.97s)

TestPause/serial/VerifyStatus (1.85s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-182142 --output=json --layout=cluster
=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-182142 --output=json --layout=cluster: exit status 2 (1.8524377s)
-- stdout --
	{"Name":"pause-182142","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-182142","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (1.85s)
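
Note: both the process exit status 2 and the StatusCode fields above are expected for a paused profile. In the --layout=cluster JSON, minikube reports HTTP-style status codes (200 OK, 405 Stopped, 418 Paused, all visible in the payload), and the non-zero exit mirrors the paused state. A minimal sketch decoding just the top-level fields shown (the struct models only a subset of the payload):

    // Sketch: unmarshal a subset of the `minikube status --output=json
    // --layout=cluster` payload printed above. 418 = Paused, 405 = Stopped,
    // 200 = OK, matching the report output.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type clusterStatus struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    func main() {
        raw := []byte(`{"Name":"pause-182142","StatusCode":418,"StatusName":"Paused"}`)
        var st clusterStatus
        if err := json.Unmarshal(raw, &st); err != nil {
            panic(err)
        }
        fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
        // Output: pause-182142: Paused (418)
    }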

TestPause/serial/Unpause (2.43s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-182142 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-182142 --alsologtostderr -v=5: (2.4287335s)
--- PASS: TestPause/serial/Unpause (2.43s)

TestStartStop/group/old-k8s-version/serial/FirstStart (159.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-182839 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-182839 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (2m39.3489994s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.35s)

TestStartStop/group/no-preload/serial/FirstStart (168.17s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-182933 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-182933 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (2m48.1749409s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (168.17s)

TestStartStop/group/embed-certs/serial/FirstStart (123.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-182958 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-182958 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (2m3.7096251s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (123.71s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-183055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-183055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (1m48.5928179s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.59s)

TestStartStop/group/old-k8s-version/serial/DeployApp (15.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-182839 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0f01e57f-9be8-4d09-9a62-2cafd2bb0bcd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [0f01e57f-9be8-4d09-9a62-2cafd2bb0bcd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 14.0564231s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-182839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (15.14s)
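
Note: the closing `exec busybox -- /bin/sh -c "ulimit -n"` step reads the container's open-file-descriptor limit and doubles as a smoke test that `kubectl exec` reaches the freshly deployed pod. A minimal sketch of that probe, assuming kubectl on PATH and this run's context name:

    // Sketch: read the busybox pod's open-file limit via `kubectl exec`,
    // mirroring the final step of the DeployApp check above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "old-k8s-version-182839",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("open-file limit: %s", out)
    }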

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-182839 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-182839 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1397354s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-182839 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.75s)

TestStartStop/group/old-k8s-version/serial/Stop (13.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-182839 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-182839 --alsologtostderr -v=3: (13.6629085s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-182839 -n old-k8s-version-182839
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-182839 -n old-k8s-version-182839: exit status 7 (703.2066ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-182839 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.44s)

TestStartStop/group/old-k8s-version/serial/SecondStart (452.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-182839 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-182839 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m30.1051337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-182839 -n old-k8s-version-182839
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-182839 -n old-k8s-version-182839: (2.1034487s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (452.21s)

TestStartStop/group/embed-certs/serial/DeployApp (11.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-182958 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [44ab9223-faa5-4bde-8576-c8c0d91a98b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [44ab9223-faa5-4bde-8576-c8c0d91a98b7] Running
E1107 18:32:10.619227    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0927696s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-182958 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.56s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-182958 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-182958 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1056304s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-182958 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.56s)

TestStartStop/group/embed-certs/serial/Stop (13.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-182958 --alsologtostderr -v=3
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-182958 --alsologtostderr -v=3: (13.3911645s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.39s)

TestStartStop/group/no-preload/serial/DeployApp (11.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-182933 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [4c7c87d1-0e03-4d94-9bbf-2db19a39e5a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4c7c87d1-0e03-4d94-9bbf-2db19a39e5a0] Running
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0499925s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-182933 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-182958 -n embed-certs-182958
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-182958 -n embed-certs-182958: exit status 7 (662.4863ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-182958 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.39s)

TestStartStop/group/embed-certs/serial/SecondStart (349.94s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-182958 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-182958 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (5m47.4238927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-182958 -n embed-certs-182958
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-182958 -n embed-certs-182958: (2.5138385s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (349.94s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-182933 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-182933 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3361985s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-182933 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.79s)

TestStartStop/group/no-preload/serial/Stop (13.78s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-182933 --alsologtostderr -v=3
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-182933 --alsologtostderr -v=3: (13.7791105s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.78s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-183055 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [039ce24b-5f7f-4f65-9acd-65b898a8545c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:342: "busybox" [039ce24b-5f7f-4f65-9acd-65b898a8545c] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0719187s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-183055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-182933 -n no-preload-182933
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-182933 -n no-preload-182933: exit status 7 (662.7098ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-182933 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.30s)

TestStartStop/group/no-preload/serial/SecondStart (354.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-182933 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-182933 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (5m52.4276483s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-182933 -n no-preload-182933
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-182933 -n no-preload-182933: (2.5088023s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (354.94s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-183055 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-183055 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6349432s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-183055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-183055 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-183055 --alsologtostderr -v=3: (13.8383049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: exit status 7 (682.9008ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-183055 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.41s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (391.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-183055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3
E1107 18:34:49.399660    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 18:35:23.579398    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:35:40.073498    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:37:10.632584    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-183055 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (6m29.368382s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: (2.4723127s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (391.84s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (101.06s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-xssbq" [5d581fa7-b712-4964-ab4f-496b20d00536] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-xssbq" [5d581fa7-b712-4964-ab4f-496b20d00536] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m41.0575168s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (101.06s)
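
Note: these UserAppExistsAfterStop checks wait up to 9m0s for a pod matching the k8s-app=kubernetes-dashboard label to leave Pending and report Running, proving the user's workload survived the stop/start cycle. A minimal polling sketch of that wait (label, namespace, and timeout come from the log; the loop and 5s interval are ours, and the real helper also tracks readiness conditions, as the Pending/Ready lines above show):

    // Sketch: poll kubectl until a pod matching the label selector reports
    // phase Running, within the 9-minute budget these tests use.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(9 * time.Minute)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "get", "pods",
                "-n", "kubernetes-dashboard",
                "-l", "k8s-app=kubernetes-dashboard",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if strings.Contains(string(out), "Running") {
                fmt.Println("dashboard pod is Running")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for dashboard pod")
    }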

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (75.05s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9gsc5" [ef02770f-432f-4204-bd97-bc9f0d70ef07] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9gsc5" [ef02770f-432f-4204-bd97-bc9f0d70ef07] Running
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m15.0473352s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (75.05s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (35.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-r49vd" [940ce27d-d490-42a8-b7c1-8e54ba9539b8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 18:39:32.582196    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-r49vd" [940ce27d-d490-42a8-b7c1-8e54ba9539b8] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 35.0568962s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (35.06s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (92.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5pwg8" [960f61d1-18ae-4044-b5ee-8862da53a7d3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 18:39:49.403175    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5pwg8" [960f61d1-18ae-4044-b5ee-8862da53a7d3] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m32.0596923s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (92.06s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-r49vd" [940ce27d-d490-42a8-b7c1-8e54ba9539b8] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0216643s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-182839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.65s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.7s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9gsc5" [ef02770f-432f-4204-bd97-bc9f0d70ef07] Running
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0194482s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-182933 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.70s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.67s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-xssbq" [5d581fa7-b712-4964-ab4f-496b20d00536] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0328448s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-182958 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.67s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-182839 "sudo crictl images -o json"
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-182839 "sudo crictl images -o json": (2.1938557s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-182933 "sudo crictl images -o json"
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-182933 "sudo crictl images -o json": (2.0852232s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.09s)

TestStartStop/group/old-k8s-version/serial/Pause (14.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-182839 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-182839 --alsologtostderr -v=1: (3.0972816s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-182839 -n old-k8s-version-182839
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-182839 -n old-k8s-version-182839: exit status 2 (1.8063087s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-182839 -n old-k8s-version-182839
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-182839 -n old-k8s-version-182839: exit status 2 (1.7987775s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-182839 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-182839 --alsologtostderr -v=1: (3.0201184s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-182839 -n old-k8s-version-182839
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-182839 -n old-k8s-version-182839: (2.2960944s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-182839 -n old-k8s-version-182839
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-182839 -n old-k8s-version-182839: (2.319731s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (14.34s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-182958 "sudo crictl images -o json"
=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-182958 "sudo crictl images -o json": (2.0919581s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.09s)

TestStartStop/group/no-preload/serial/Pause (18.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-182933 --alsologtostderr -v=1
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-182933 --alsologtostderr -v=1: (4.4568233s)
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-182933 -n no-preload-182933
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-182933 -n no-preload-182933: exit status 2 (2.1349325s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-182933 -n no-preload-182933
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-182933 -n no-preload-182933: exit status 2 (2.0514586s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-182933 --alsologtostderr -v=1
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-182933 --alsologtostderr -v=1: (4.2351173s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-182933 -n no-preload-182933
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-182933 -n no-preload-182933: (3.2438565s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-182933 -n no-preload-182933
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-182933 -n no-preload-182933: (2.8252746s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (18.95s)
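
Annotation: the Pause sequence above is pause, then read the component states via minikube status Go templates (exit status 2 while paused is tolerated, hence the "may be ok" notes), then unpause and re-read. Below is a minimal Go sketch of that flow using os/exec, assuming the binary path and profile name from this run; it is illustrative, not the actual helper behind start_stop_delete_test.go:311.

// pause/unpause verification sketch (assumptions: binary path and
// profile name are taken from the log above; not the real test code).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const mk = "out/minikube-windows-amd64.exe"

// status runs `minikube status --format={{.FIELD}}` for the profile and
// returns trimmed stdout. While the cluster is paused minikube exits
// non-zero, so the error is handed back for the caller to tolerate.
func status(profile, field string) (string, error) {
	out, err := exec.Command(mk, "status", "--format", "{{."+field+"}}",
		"-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "no-preload-182933"

	exec.Command(mk, "pause", "-p", profile).Run()
	api, _ := status(profile, "APIServer") // "Paused", exit status 2
	kub, _ := status(profile, "Kubelet")   // "Stopped", exit status 2
	fmt.Printf("paused:   APIServer=%s Kubelet=%s\n", api, kub)

	exec.Command(mk, "unpause", "-p", profile).Run()
	api, _ = status(profile, "APIServer") // back to "Running", exit 0
	kub, _ = status(profile, "Kubelet")
	fmt.Printf("unpaused: APIServer=%s Kubelet=%s\n", api, kub)
}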

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (15.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-182958 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-182958 --alsologtostderr -v=1: (2.6164602s)

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-182958 -n embed-certs-182958

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-182958 -n embed-certs-182958: exit status 2 (1.8959616s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-182958 -n embed-certs-182958

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-182958 -n embed-certs-182958: exit status 2 (1.8734857s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-182958 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-182958 --alsologtostderr -v=1: (3.7445725s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-182958 -n embed-certs-182958

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-182958 -n embed-certs-182958: (2.5071131s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-182958 -n embed-certs-182958

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-182958 -n embed-certs-182958: (2.5046861s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (15.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (164.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-184042 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-184042 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (2m44.011023s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (164.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (142.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (2m22.569779s)
--- PASS: TestNetworkPlugins/group/auto/Start (142.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (163.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-182329 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-182329 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (2m43.1799828s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (163.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5pwg8" [960f61d1-18ae-4044-b5ee-8862da53a7d3] Running
E1107 18:41:19.897604    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:19.913331    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:19.928721    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:19.959633    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:20.006749    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:20.099534    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:20.272133    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:20.604466    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:21.248886    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:41:22.530154    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0306805s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-183055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-183055 "sudo crictl images -o json"
E1107 18:41:25.098848    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-183055 "sudo crictl images -o json": (1.5630562s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.56s)
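
Annotation: VerifyKubernetesImages lists the container runtime's images over SSH as JSON and reports anything outside the image set expected for this Kubernetes version, which is why the busybox test image is called out above. The Go sketch below shows the shape of that check; the JSON struct models only the fields used here, and the registry allow-list is illustrative (the real test compares against a versioned image table).

// image audit sketch; profile name from the log, allow-list assumed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models the parts of `crictl images -o json` used below.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "ssh",
		"-p", "default-k8s-diff-port-183055", "sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Anything outside the core registries gets reported, as
			// gcr.io/k8s-minikube/busybox was in the log above.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") &&
				!strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "docker.io/kubernetesui/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}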

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-183055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-183055 --alsologtostderr -v=1: (2.359431s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: exit status 2 (1.6199686s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
E1107 18:41:30.232476    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: exit status 2 (1.5547504s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-183055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-183055 --alsologtostderr -v=1: (2.1765946s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: (2.4404963s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-183055 -n default-k8s-diff-port-183055: (1.7996951s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (11.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.47s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-184042 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-184042 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.4709073s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-184042 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-184042 --alsologtostderr -v=3: (5.7435142s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.74s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (1.7s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-182327 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-182327 "pgrep -a kubelet": (1.7020183s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.70s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (27.91s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-182327 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-4sg4z" [6c7a209b-24b6-4327-8f5a-85a5a8459c43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-4sg4z" [6c7a209b-24b6-4327-8f5a-85a5a8459c43] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 27.0793732s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (27.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042: exit status 7 (714.6813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-184042 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (59.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-184042 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3
E1107 18:43:43.858601    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-184042 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (57.6341337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-184042 -n newest-cni-184042: (2.0926853s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (59.73s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:342: "kindnet-gvclx" [893b5a97-9f22-4131-933f-f5a30d7b2aec] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0554125s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (1.67s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-182329 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-182329 "pgrep -a kubelet": (1.6722691s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.67s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.67s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-182327 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.67s)
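
Annotation: the DNS subtest verifies in-cluster name resolution by exec'ing nslookup for kubernetes.default inside the netcat deployment. A self-contained sketch of the same probe, assuming the kubectl context from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the lookup from inside the cluster, as net_test.go:169 does.
	out, err := exec.Command("kubectl", "--context", "auto-182327",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		fmt.Println("lookup failed:", err, string(out))
		return
	}
	// A healthy answer typically names the fully qualified service.
	if strings.Contains(string(out), "kubernetes.default.svc.cluster.local") {
		fmt.Println("cluster DNS OK")
	}
}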

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.75s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.75s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (42.11s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-182329 replace --force -f testdata\netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-26kfr" [aa93687f-f898-47a8-bbc2-bee3e4f351ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 18:44:04.268273    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:44:06.558273    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-26kfr" [aa93687f-f898-47a8-bbc2-bee3e4f351ee] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 41.1067028s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (42.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.71s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.6982855s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.71s)
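
Annotation: HairPin asks whether a pod can reach itself through its own Service name (netcat on 8080). The exit status 1 above still yields PASS, which is consistent with the harness treating a refused hairpin connection as the expected outcome for some network plugins. A sketch of the probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Dial the pod's own Service from inside the pod (net_test.go:238).
	err := exec.Command("kubectl", "--context", "auto-182327",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").Run()
	if err != nil {
		// Plugins without hairpin support are expected to land here.
		fmt.Println("hairpin connection refused:", err)
		return
	}
	fmt.Println("hairpin connection allowed")
}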

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.77s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-184042 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-184042 "sudo crictl images -o json": (2.7697609s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.54s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-182329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-182329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.63s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-182329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.63s)

                                                
                                    
TestNetworkPlugins/group/false/Start (380.37s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-182329 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-182329 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (6m20.3674246s)
--- PASS: TestNetworkPlugins/group/false/Start (380.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (357.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E1107 18:45:40.071198    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.
E1107 18:46:19.891625    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:46:48.123249    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.
E1107 18:47:10.629555    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 18:47:21.830586    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:47:44.516962    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:47:49.642974    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-182933\client.crt: The system cannot find the path specified.
E1107 18:48:12.325337    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-183055\client.crt: The system cannot find the path specified.
E1107 18:48:34.652085    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:34.666420    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:34.692565    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:34.724264    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:34.770716    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:34.863909    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:35.036661    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:35.370659    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:36.014298    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:37.296943    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:39.859662    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:44.981637    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:55.234450    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:48:55.828783    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:55.843988    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:55.858963    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:55.890152    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:55.937295    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:56.025012    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:56.195831    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:56.524811    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:57.176155    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:48:58.464194    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:49:01.036808    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:49:06.163412    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:49:15.730509    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:49:16.419320    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:49:36.903526    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:49:49.411338    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-174200\client.crt: The system cannot find the path specified.
E1107 18:49:56.704381    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:50:17.874465    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
E1107 18:50:40.074643    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-170143\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (5m57.7314707s)
--- PASS: TestNetworkPlugins/group/bridge/Start (357.73s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (1.7s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-182329 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-182329 "pgrep -a kubelet": (1.7028386s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.70s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (27.01s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-182329 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-tgpc2" [4247ec57-54df-4b72-9e15-9fb16441723c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 18:51:18.630941    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-182327\client.crt: The system cannot find the path specified.
E1107 18:51:19.892734    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-182839\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-tgpc2" [4247ec57-54df-4b72-9e15-9fb16441723c] Running
E1107 18:51:39.800567    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-182329\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 26.1161862s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (27.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (1.67s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-182327 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-182327 "pgrep -a kubelet": (1.6663008s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (26.99s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-182327 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-q45ld" [1a496794-ccfc-48fe-933f-3c65862e5547] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-q45ld" [1a496794-ccfc-48fe-933f-3c65862e5547] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 26.1087104s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (26.99s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (106.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (1m46.210107s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (106.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-182327 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-182327 "pgrep -a kubelet": (1.5617341s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.56s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (25.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-182327 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-cx84x" [114a1249-2aac-4cc9-980d-97c8a670e97c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-cx84x" [114a1249-2aac-4cc9-980d-97c8a670e97c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 25.0207085s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (25.79s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-182327 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.54s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.52s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (106.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-182327 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (1m46.2053006s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (106.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (1.5s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-182327 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-182327 "pgrep -a kubelet": (1.4963602s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.50s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (25.78s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-182327 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vz8dr" [37f54db4-7590-4e3f-af02-1d8119ae645c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-vz8dr" [37f54db4-7590-4e3f-af02-1d8119ae645c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 25.1037657s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (25.78s)
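
Annotation: each NetCatPod subtest applies testdata\netcat-deployment.yaml and then polls for up to 15m until a pod matching app=netcat is healthy. The sketch below approximates that wait by polling pod phase through kubectl (the real helper inspects pod conditions, not just the phase):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(15 * time.Minute) // budget from the log
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "kubenet-182327",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}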

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.53s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.53s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.5s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-182327 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.50s)

                                                
                                    

Test skip (25/277)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (54.74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 27.9763ms
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-dld64" [9293c642-35f7-4225-99a1-b770908ca135] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.1862195s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-5jcp6" [c164e06b-60b5-4e11-8ee7-e046b2bf3a3c] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.113469s
addons_test.go:293: (dbg) Run:  kubectl --context addons-164917 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-164917 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-164917 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (44.0828429s)
addons_test.go:308: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (54.74s)
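
The registry probe above is a plain shell-out: the addon test asserts that the in-cluster registry answers HTTP before doing anything else with it. A minimal standalone sketch of the same check in Go, assuming the profile name addons-164917 from this log and a kubectl on PATH (the program is illustrative, not minikube source):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // wget --spider fetches headers only, so success proves the registry
        // Service name resolves and answers without pulling any data.
        out, err := exec.Command("kubectl",
            "--context", "addons-164917",
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local",
        ).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            log.Fatalf("registry not reachable from inside the cluster: %v", err)
        }
    }

The log's invocation passes -it; the sketch uses -i alone because a non-interactive caller has no TTY to attach.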

TestAddons/parallel/Ingress (50.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-164917 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-164917 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:185: (dbg) Done: kubectl --context addons-164917 replace --force -f testdata\nginx-ingress-v1.yaml: (2.8129886s)
addons_test.go:198: (dbg) Run:  kubectl --context addons-164917 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:198: (dbg) Done: kubectl --context addons-164917 replace --force -f testdata\nginx-pod-svc.yaml: (1.5988959s)
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [9e1e1bc5-f3a5-4bb9-8330-506e67b7462e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [9e1e1bc5-f3a5-4bb9-8330-506e67b7462e] Running
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 44.2029622s
addons_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-164917 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p addons-164917 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.4606447s)
addons_test.go:235: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (50.43s)
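
The curl above succeeds only because ingress-nginx routes on the Host header: inside the node, 127.0.0.1:80 is the ingress controller, and the request reaches the nginx pod only when the header matches the rule. A sketch of the same check driven from Go (profile name and binary path taken from this log; the "Welcome to nginx" marker assumes the pod serves the stock nginx page):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Without the Host header, the controller would return its 404
        // default backend instead of the nginx pod behind the rule.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-164917",
            "ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
        if err != nil {
            fmt.Println("ingress check failed:", err)
            return
        }
        if strings.Contains(string(out), "Welcome to nginx") {
            fmt.Println("ingress routed the Host-matched request to the nginx pod")
        }
    }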

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
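
The three driver tests above gate on the host OS rather than on cluster state. The skip lines come from the standard Go testing idiom; a sketch of that idiom (illustrative names, not minikube's literal source):

    package driver_test

    import (
        "runtime"
        "testing"
    )

    func TestKVMDriverInstall(t *testing.T) {
        // t.Skip records the test as SKIP in the report, as above,
        // instead of failing it on hosts where KVM cannot exist.
        if runtime.GOOS != "linux" {
            t.Skip("Skip if not linux.")
        }
        // driver install/update assertions would follow here
    }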

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-170143 --alsologtostderr -v=1]
functional_test.go:909: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-170143 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 7132: Access is denied.
E1107 17:17:10.586556    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:18:33.784981    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:22:10.584185    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:27:10.588596    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:32:10.594484    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:35:13.796272    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
E1107 17:37:10.585792    9948 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-164917\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)
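
The "unable to terminate pid 7132: Access is denied." line is a Windows-specific cleanup problem: there is no SIGINT to send to a child process, so stopping a daemonized command means terminating it (and its child tree) outright. A hedged sketch of the usual fallback via taskkill (killTree is an illustrative name, not a minikube helper; the PID is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
    )

    // killTree forcibly (/F) terminates a process and its child tree (/T).
    // Even this can fail with "Access is denied" when the caller lacks
    // rights over the target process, matching the helpers_test.go line.
    func killTree(pid int) error {
        out, err := exec.Command("taskkill", "/T", "/F", "/PID", strconv.Itoa(pid)).CombinedOutput()
        if err != nil {
            return fmt.Errorf("taskkill %d: %v: %s", pid, err, out)
        }
        return nil
    }

    func main() {
        if err := killTree(7132); err != nil {
            fmt.Println(err)
        }
    }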

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-170143 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-170143 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-pstw6" [3fbcc286-7635-4244-8d99-7d79df3dd4c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-pstw6" [3fbcc286-7635-4244-8d99-7d79df3dd4c8] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0875915s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.68s)
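
The skip cites minikube issue #7383: on port-forwarded drivers such as Docker on Windows, the node IP is not routable from the host, so the NodePort that Kubernetes assigned to the Service cannot be curled directly. A sketch that at least reads the assigned port back (context and Service names from this log; the jsonpath expression is standard kubectl):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Kubernetes picks the NodePort at expose time; this prints the
        // port the test would probe if the node IP were reachable.
        out, err := exec.Command("kubectl", "--context", "functional-170143",
            "get", "svc", "hello-node-connect",
            "-o", "jsonpath={.spec.ports[0].nodePort}").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("hello-node-connect NodePort: %s\n", out)
    }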

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (48.2s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-174200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-174200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.3292206s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-174200 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-174200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [80c29c42-d387-437f-a183-3fe0b0aabb52] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [80c29c42-d387-437f-a183-3fe0b0aabb52] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 34.0668679s
addons_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-174200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.4119312s)
addons_test.go:235: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (48.20s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.62s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-183053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-183053
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-183053: (1.6167074s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.62s)

TestNetworkPlugins/group/flannel (1.61s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-182327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-182327
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-182327: (1.6061737s)
--- SKIP: TestNetworkPlugins/group/flannel (1.61s)

TestNetworkPlugins/group/custom-flannel (1.71s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-182329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-182329
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-182329: (1.7118412s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (1.71s)