Test Report: Docker_Windows 14079

798c4e8fed290cfa318a9fb994a7c6f555db39c1:2022-06-01:24222

Tests failed (10/213)
TestFunctional/parallel/ServiceCmd (1958.41s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601175654-3412 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220601175654-3412 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-6c9nb" [6dff7187-bcdf-4179-b4f5-61f1663b106c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-6c9nb" [6dff7187-bcdf-4179-b4f5-61f1663b106c] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.03154s
functional_test.go:1448: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service list: (6.9155626s)
functional_test.go:1462: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1391: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node: exit status 1 (32m0.03571s)

-- stdout --
	https://127.0.0.1:58749

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220601175654-3412 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name:         hello-node-54fbb85-6c9nb
Namespace:    default
Priority:     0
Node:         functional-20220601175654-3412/192.168.49.2
Start Time:   Wed, 01 Jun 2022 18:04:39 +0000
Labels:       app=hello-node
pod-template-hash=54fbb85
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
IP:           172.17.0.7
Controlled By:  ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID:   docker://6a459f606dda58dc39cfd752f58f019af44834a39e66aacbb47dfbc0a96d47b5
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 01 Jun 2022 18:04:41 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2vcx9 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-2vcx9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-54fbb85-6c9nb to functional-20220601175654-3412
Normal  Pulled     32m        kubelet, functional-20220601175654-3412  Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal  Created    32m        kubelet, functional-20220601175654-3412  Created container echoserver
Normal  Started    32m        kubelet, functional-20220601175654-3412  Started container echoserver

Name:         hello-node-connect-74cf8bc446-kpkgl
Namespace:    default
Priority:     0
Node:         functional-20220601175654-3412/192.168.49.2
Start Time:   Wed, 01 Jun 2022 18:04:21 +0000
Labels:       app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID:   docker://bb5e57c07e7877956fb985ba62a3029ede63712bf2511d85c045edc4cd745b8e
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 01 Jun 2022 18:04:33 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glnhl (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-glnhl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-connect-74cf8bc446-kpkgl to functional-20220601175654-3412
Normal  Pulling    32m        kubelet, functional-20220601175654-3412  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     32m        kubelet, functional-20220601175654-3412  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 10.5925535s
Normal  Created    32m        kubelet, functional-20220601175654-3412  Created container echoserver
Normal  Started    32m        kubelet, functional-20220601175654-3412  Started container echoserver

functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220601175654-3412 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220601175654-3412 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.100.28.255
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30534/TCP
Endpoints:                172.17.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601175654-3412
helpers_test.go:231: (dbg) Done: docker inspect functional-20220601175654-3412: (1.050639s)
helpers_test.go:235: (dbg) docker inspect functional-20220601175654-3412:

-- stdout --
	[
	    {
	        "Id": "fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f",
	        "Created": "2022-06-01T17:57:47.7481206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T17:57:48.7685373Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/hosts",
	        "LogPath": "/var/lib/docker/containers/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f/fcdaf16a6a52b400f39553431ba91b6d86d477c93a73030d77c141809b7d607f-json.log",
	        "Name": "/functional-20220601175654-3412",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220601175654-3412:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220601175654-3412",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2-init/diff:/var/lib/docker/overlay2/487b259deb346e6ca1e96023cfc1832638489725b45384e10e2c2effe462993c/diff:/var/lib/docker/overlay2/7830a7ee158a10893945c1b577efeb821d499cce7646d95d3c0cffb3ed372dca/diff:/var/lib/docker/overlay2/6fe83b204fd4124b69c52dc2b8620b75ac92764b58a8d1af6662ff240e517719/diff:/var/lib/docker/overlay2/6362560b46c9fab8d6514c8429f6275481f64020b6a76226333ec63d40b3509c/diff:/var/lib/docker/overlay2/b947dedac2c38cb9982c9b363e89606d658250ef2798320fdf3517f747048abd/diff:/var/lib/docker/overlay2/bc2839e6d5fd56592e9530bb7f1f81ed9502bdb7539e7f429732e9cf4cd3b17d/diff:/var/lib/docker/overlay2/1b3239e13a55e9fa626a7541842d884445974471039cc2d9226ad10f2b953536/diff:/var/lib/docker/overlay2/1884c2d81ecac540a3174fb86cefef2fd199eaa5c78d29afe6c63aff263f9584/diff:/var/lib/docker/overlay2/d1c361312180db411937b7786e1329e12f9ed7b9439d4574d6d9a237a8ef8a9e/diff:/var/lib/docker/overlay2/15125b
9e77872950f8bc77e7ec27026feb64d93311200f76586c570bbceb3810/diff:/var/lib/docker/overlay2/1778c10167346a2b58dd494e4689512b56050eed4b6df53a451f9aa373c3af35/diff:/var/lib/docker/overlay2/e45fa45d984d0fdd2eaca3b15c5e81abaa51b6b84fc051f20678d16cb6548a34/diff:/var/lib/docker/overlay2/54cea2bf354fab8e2c392a574195b06b919122ff6a1fb01b05f554ba43d9719e/diff:/var/lib/docker/overlay2/8667e3403c29f1a18aaababc226712f548d7dd623a4b9ac413520cf72955fb40/diff:/var/lib/docker/overlay2/5d20284be4fd7015d5b8eb6ae55b108a262e3c66cdaa9a8c4c23a6eb1726d4da/diff:/var/lib/docker/overlay2/d623242b443d7de7f75761cda756115d0f9df9f3b73144554928ceac06876d5b/diff:/var/lib/docker/overlay2/143dd7f527aa222e0eeaafe5e0182140c95e402aa335e7994b2aa7f1e6b6ba3c/diff:/var/lib/docker/overlay2/d690aea98cc6cb39fdd3f6660997b792085628157b14d576701adc72d3e6cf55/diff:/var/lib/docker/overlay2/2bb1d07709342e3bcb4feda7dc7d17fa9707986bf88cd7dc52eab255748276e0/diff:/var/lib/docker/overlay2/ea79e7f8097cf29c435b8a18ee6332b067ec4f7858b6eaabf897d2076a8deb3e/diff:/var/lib/d
ocker/overlay2/dab209c0bb58d228f914118438b0a79649c46857e6fcb416c0c556c049154f5d/diff:/var/lib/docker/overlay2/3bd421aaea3202bb8715cdd0f452aa411f20f2025b05d6a03811ebc7d0347896/diff:/var/lib/docker/overlay2/7dc112f5a6dc7809e579b2eaaeef54d3d5ee1326e9f35817dad641bc4e2c095a/diff:/var/lib/docker/overlay2/772b23d424621d351ce90f47e351441dc7fb224576441813bb86be52c0552022/diff:/var/lib/docker/overlay2/86ea33f163c6d58acb53a8e5bb27e1c131a6c915d7459ca03c90383b299fde58/diff:/var/lib/docker/overlay2/58deaba6fb571643d48dd090dd850eeb8fd343f41591580f4509fe61280e87de/diff:/var/lib/docker/overlay2/d8e5be8b94fe5858e777434bd7d360128719def82a5e7946fd4cb69aecab39fe/diff:/var/lib/docker/overlay2/a319e02b15899f20f933362a00fa40be829441edea2a0be36cc1e30b3417cf57/diff:/var/lib/docker/overlay2/b315efdf7f2b5f50f74664829533097f21ab8bda14478b76e9b5781079830b20/diff:/var/lib/docker/overlay2/bb96faec132eb5919c94fc772f61e63514308af6f72ec141483a94a85a77cc3b/diff:/var/lib/docker/overlay2/55dbff36528117ad96b3be9ee2396f7faee2f0b493773aa5abf5ba2b23a
5f728/diff:/var/lib/docker/overlay2/f11da52264a1f34c3b2180d2adcbcb7cc077c7f91611974bf0946d6bea248de5/diff:/var/lib/docker/overlay2/6ca19b0a8327fcd8f60b06c6b0f4519ff5f0f3eacd034e6c5c16ed45239f2238/diff:/var/lib/docker/overlay2/f86ed588a9cb5b359a174312bf8595e8e896ba3d8922b0bae1d8839518d24fb6/diff:/var/lib/docker/overlay2/0bf0e1906e62c903f71626646e2339b8e2c809d40948898d803dcaf0218ed0dd/diff:/var/lib/docker/overlay2/c8ff277ec5a9fa0db24ad64c7e0523b2b5a5c7d64f2148a0c9823fdd5bc60cad/diff:/var/lib/docker/overlay2/4cfbf9fc2a4a968773220ae74312f07a616afc80cbf9a4b68e2c2357c09ca009/diff:/var/lib/docker/overlay2/9a235e4b15bee3f10260f9356535723bf351a49b1f19af094d59a1439b7a9632/diff:/var/lib/docker/overlay2/9699d245a454ce1e21f1ac875a0910a63fb34d3d2870f163d8b6d258f33c2f4f/diff:/var/lib/docker/overlay2/6e093a9dfe282a2a53a4081251541e0c5b4176bb42d9c9bf908f19b1fdc577f5/diff:/var/lib/docker/overlay2/98036438a55a1794d298c11dc1eb0633e06ed433b84d24a3972e634a0b11deb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d711ffab7b1d4af0fe3d4b62a0d56d2fcd50c181ce03af8a939b4c2a212e46c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220601175654-3412",
	                "Source": "/var/lib/docker/volumes/functional-20220601175654-3412/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220601175654-3412",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220601175654-3412",
	                "name.minikube.sigs.k8s.io": "functional-20220601175654-3412",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "42b735bd10a83f995a09f31f236acd7116ce6887781c1e4894ffa72ada936b18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58390"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58391"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58392"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/42b735bd10a8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220601175654-3412": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fcdaf16a6a52",
	                        "functional-20220601175654-3412"
	                    ],
	                    "NetworkID": "db9a83a2c966b245ab10d1a9620cf47ee96af9f394e8fcf24c9b12fc208bb76c",
	                    "EndpointID": "79daea23eb5020922fd179fb1458a617d7253a0ea49311a3c3846f0edf0dd161",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601175654-3412 -n functional-20220601175654-3412
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601175654-3412 -n functional-20220601175654-3412: (6.2857054s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs -n 25: (8.2058881s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                                                Args                                                 |            Profile             |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
	| ssh            | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | ssh sudo cat                                                                                        |                                |                   |                |                     |                     |
	|                | /etc/ssl/certs/3ec20f2e.0                                                                           |                                |                   |                |                     |                     |
	| cp             | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | cp testdata\cp-test.txt                                                                             |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image load --daemon                                                  | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412                               |                                |                   |                |                     |                     |
	| ssh            | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | ssh -n                                                                                              |                                |                   |                |                     |                     |
	|                | functional-20220601175654-3412                                                                      |                                |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| cp             | functional-20220601175654-3412 cp functional-20220601175654-3412:/home/docker/cp-test.txt           | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3371800001\001\cp-test.txt |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image save                                                           | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412                               |                                |                   |                |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |                |                     |                     |
	| ssh            | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | ssh -n                                                                                              |                                |                   |                |                     |                     |
	|                | functional-20220601175654-3412                                                                      |                                |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image rm                                                             | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412                               |                                |                   |                |                     |                     |
	| addons         | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | addons list                                                                                         |                                |                   |                |                     |                     |
	| addons         | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | addons list -o json                                                                                 |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image load                                                           | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:03 GMT | 01 Jun 22 18:03 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image save --daemon                                                  | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220601175654-3412                               |                                |                   |                |                     |                     |
	| update-context | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| service        | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
	|                | service list                                                                                        |                                |                   |                |                     |                     |
	| update-context | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:04 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| update-context | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:04 GMT | 01 Jun 22 18:05 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | image ls --format short                                                                             |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | image ls --format yaml                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412 image build -t                                                       | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | localhost/my-image:functional-20220601175654-3412                                                   |                                |                   |                |                     |                     |
	|                | testdata\build                                                                                      |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | image ls --format json                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220601175654-3412                                                                      | functional-20220601175654-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 18:05 GMT | 01 Jun 22 18:05 GMT |
	|                | image ls --format table                                                                             |                                |                   |                |                     |                     |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
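
The audit table above is minikube's command history for the profile; the `image save` / `image rm` / `image load` rows around 18:03 form the image round-trip that TestFunctional exercises. As a sketch, the sequence can be reconstructed from the rows above — printed rather than executed, since the real commands need the live CI profile and Windows workspace path:

```shell
# Reconstruct the image round-trip recorded in the audit table.
# Commands are printed, not run: they assume the CI profile and tar path below.
PROFILE=functional-20220601175654-3412
IMG="gcr.io/google-containers/addon-resizer:$PROFILE"
TAR='C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar'

set -- "image save $IMG $TAR" "image rm $IMG" "image load $TAR" "image ls"
for step in "$@"; do
  # printf (not echo) so the backslashes in the Windows path survive verbatim
  printf 'minikube -p %s %s\n' "$PROFILE" "$step"
done
```

Each printed line matches one table row; `image ls` at the end is how the test confirms the image survived the round-trip.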
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 18:02:27
	Running on machine: minikube4
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 18:02:27.003550    8928 out.go:296] Setting OutFile to fd 992 ...
	I0601 18:02:27.059177    8928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:27.059177    8928 out.go:309] Setting ErrFile to fd 712...
	I0601 18:02:27.059177    8928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:27.071179    8928 out.go:303] Setting JSON to false
	I0601 18:02:27.074171    8928 start.go:115] hostinfo: {"hostname":"minikube4","uptime":66662,"bootTime":1654039885,"procs":169,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 18:02:27.074171    8928 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 18:02:27.077211    8928 out.go:177] * [functional-20220601175654-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 18:02:27.080206    8928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 18:02:27.082181    8928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 18:02:27.085184    8928 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 18:02:27.087175    8928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 18:02:27.089183    8928 config.go:178] Loaded profile config "functional-20220601175654-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 18:02:27.090206    8928 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 18:02:29.982611    8928 docker.go:137] docker version: linux-20.10.14
	I0601 18:02:29.989615    8928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 18:02:32.127496    8928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.137771s)
	I0601 18:02:32.128411    8928 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:61 OomKillDisable:true NGoroutines:62 SystemTime:2022-06-01 18:02:31.068937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 18:02:32.132513    8928 out.go:177] * Using the docker driver based on existing profile
	I0601 18:02:32.135191    8928 start.go:284] selected driver: docker
	I0601 18:02:32.135191    8928 start.go:806] validating driver "docker" against &{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 18:02:32.135191    8928 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 18:02:32.155422    8928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 18:02:34.213193    8928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0575405s)
	I0601 18:02:34.213337    8928 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-01 18:02:33.2036185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 18:02:34.258691    8928 cni.go:95] Creating CNI manager for ""
	I0601 18:02:34.258691    8928 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 18:02:34.258691    8928 start_flags.go:306] config:
	{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 18:02:34.262480    8928 out.go:177] * dry-run validation complete!
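
In the start log above, minikube probes the driver with `docker system info --format "{{json .}}"` and reads fields such as `OSType` and `ServerVersion` out of the JSON (compare the earlier `docker version: linux-20.10.14` line). A minimal sketch of that probe, using a canned JSON document in place of a live Docker daemon; the sample values mirror the log:

```shell
# Stand-in for: docker system info --format "{{json .}}"
# (canned output so the sketch runs without a Docker daemon)
info_json='{"ServerVersion":"20.10.14","OSType":"linux","OperatingSystem":"Docker Desktop","NCPU":16}'

# Crude field extraction with POSIX sed; a real client would use a JSON parser.
server_version=$(printf '%s' "$info_json" | sed -n 's/.*"ServerVersion":"\([^"]*\)".*/\1/p')
os_type=$(printf '%s' "$info_json" | sed -n 's/.*"OSType":"\([^"]*\)".*/\1/p')

echo "docker version: ${os_type}-${server_version}"   # prints: docker version: linux-20.10.14
```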
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 17:57:49 UTC, end at Wed 2022-06-01 18:37:08 UTC. --
	Jun 01 17:59:05 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T17:59:05.029235700Z" level=info msg="ignoring event" container=2ce0e4b38ac9f04643054592aef152247b94aae05e441b0889a44932c71646b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.626025400Z" level=info msg="ignoring event" container=e488dffa538a17d992d20869cc00e78495d271a2451bbb95f332b9135ed6c4ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637360900Z" level=info msg="ignoring event" container=c7bb2447fa954e0f80325f625506f591267a93d4006a17435d6f60339e195cd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637419900Z" level=info msg="ignoring event" container=dc621d8e12bb731b9e13f7ae612a8e8abdf02c5d57b50537217ea82c9f40ea93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.637628500Z" level=info msg="ignoring event" container=3253cbf1f4fe95a99732cce1ed9d390cd32de41ee4445e6aa46737745c931a0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.736178000Z" level=info msg="ignoring event" container=cb3703c9cc9afd86e857f1f5379232c178abe81a94549714e3a7ee8e262075bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.828329200Z" level=info msg="ignoring event" container=6ec6e3224eb37da6c6e69453bed14d1b984679895b31b867dd26575c033d0777 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.828774500Z" level=info msg="ignoring event" container=49aa6506b35f2e342010fe2d637e7622ef30d37f5be470c464871bb6c877ac88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.833784800Z" level=info msg="ignoring event" container=fd5dc025117b1e848f61caf495dc31bb95016f08def18176c72d985bea9b01fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.836713100Z" level=info msg="ignoring event" container=39af5bb7132841c3549814e4089e8ccab9cd4dcaa3f317ef2376ef52cfd9d5b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.926707900Z" level=info msg="ignoring event" container=54088515f3b1f2404dc743e506c50bb29f3bb7ccce2e493e1cd7878f9c2152dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:08 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:08.929511800Z" level=info msg="ignoring event" container=c85d97d43a303c9dfc5d9402d2af2d0a181576f005b3c743a841b2cea4699d18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:10 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:10.628746400Z" level=info msg="ignoring event" container=be5f34a9364f1696bb7ce89806fc3241fe5cd8dd9f251a707ad487ca889cc29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:11 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:11.138211300Z" level=info msg="ignoring event" container=1d64c64b4d6316f982234fe788dbd2f7eea1b7bb4b882e0d7a7609c53aa3eca7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:11 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:11.330436200Z" level=info msg="ignoring event" container=d42dad7731a020c158dd22012ba5aab4c0f5071c8fffef9e20fc3ea1587e66b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:13 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:13.433350100Z" level=info msg="ignoring event" container=8d9da6f17adb539a53056a3d32608b9689314dd2abf921647b1813a9d2e24fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:23 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:23.735898300Z" level=info msg="ignoring event" container=3cb91cda9605cf6855868db610bd5e0c407deef941823fda7c4cb3588cb002c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:24 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:24.743151000Z" level=info msg="ignoring event" container=b596035b2d603ed478d3b289168127e1479b806e29e73f62d30109d4b076dae0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:24 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:24.940279000Z" level=info msg="ignoring event" container=4a4e5a3bb4ea866446cc1b3437e2462ebbc2277a5bc597bbbd9d38adb928fa81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:25 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:25.146467200Z" level=info msg="ignoring event" container=249f3b6cebd0c3db2be3865994712e1487ff6b8b8d2b2931d0a40598b885b94a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:01:25 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:01:25.852056800Z" level=info msg="ignoring event" container=34197b3df9eb24955f2e2148de3a663881e459c57bca2d06639f836350b00930 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:04:29 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:04:29.437397600Z" level=info msg="ignoring event" container=428f2721d941ee9d29bee164b4a1e72f74826bea609f3fd8f3b28943beaba0f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:04:30 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:04:30.842686800Z" level=info msg="ignoring event" container=e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:05:22 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:05:22.827220500Z" level=info msg="ignoring event" container=815eebbfb5932518e2a8ae234e1a7d9fd526c3d612fce947e84ab01236dfb725 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:05:23 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:05:23.452415400Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
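
Each dockerd "ignoring event" line above embeds the affected container in a `container=<id>` field. A quick way to pull those IDs out of such a journal capture (the sample line is copied from the log above; it assumes the `container=` key is present):

```shell
# Extract the container ID from a dockerd "ignoring event" journal line.
log_line='Jun 01 18:05:22 functional-20220601175654-3412 dockerd[509]: time="2022-06-01T18:05:22.827220500Z" level=info msg="ignoring event" container=815eebbfb5932518e2a8ae234e1a7d9fd526c3d612fce947e84ab01236dfb725 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"'

container_id=$(printf '%s\n' "$log_line" | sed -n 's/.* container=\([0-9a-f]*\) .*/\1/p')
printf '%s\n' "$container_id"       # the full 64-hex ID
printf '%.12s\n' "$container_id"    # first 12 chars, docker-style short ID
```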
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	6a459f606dda5       82e4c8a736a4f                                                                                   32 minutes ago      Running             echoserver                0                   2af19e9c1aff9
	3f4995e01d5df       nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514                   32 minutes ago      Running             myfrontend                0                   d31489858fe70
	bb5e57c07e787       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   32 minutes ago      Running             echoserver                0                   0a3729a925ea3
	c2fee58ceb383       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                   33 minutes ago      Running             nginx                     0                   91606cdc227db
	fc1ceb8f5f911       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                   33 minutes ago      Running             mysql                     0                   2f251b72b2974
	e059ac677a6c9       6e38f40d628db                                                                                   35 minutes ago      Running             storage-provisioner       3                   1f3103e42f8a7
	2efa890199e80       df7b72818ad2e                                                                                   35 minutes ago      Running             kube-controller-manager   2                   0255071d48864
	9d9490ac77090       a4ca41631cc7a                                                                                   35 minutes ago      Running             coredns                   1                   575aa12ff7773
	ce413f7f994b9       8fa62c12256df                                                                                   35 minutes ago      Running             kube-apiserver            1                   2570071be0701
	249f3b6cebd0c       6e38f40d628db                                                                                   35 minutes ago      Exited              storage-provisioner       2                   1f3103e42f8a7
	3cb91cda9605c       8fa62c12256df                                                                                   35 minutes ago      Exited              kube-apiserver            0                   2570071be0701
	f8476e9b4b726       595f327f224a4                                                                                   35 minutes ago      Running             kube-scheduler            1                   03e84dc342a31
	26e97c628a456       25f8c7f3da61c                                                                                   35 minutes ago      Running             etcd                      1                   d73eb68b51a3b
	9ee2f9d1ae9d7       4c03754524064                                                                                   35 minutes ago      Running             kube-proxy                1                   3ee0efaba4f8b
	34197b3df9eb2       df7b72818ad2e                                                                                   35 minutes ago      Exited              kube-controller-manager   1                   0255071d48864
	8d9da6f17adb5       a4ca41631cc7a                                                                                   38 minutes ago      Exited              coredns                   0                   fd5dc025117b1
	54088515f3b1f       4c03754524064                                                                                   38 minutes ago      Exited              kube-proxy                0                   6ec6e3224eb37
	c85d97d43a303       25f8c7f3da61c                                                                                   38 minutes ago      Exited              etcd                      0                   3253cbf1f4fe9
	1d64c64b4d631       595f327f224a4                                                                                   38 minutes ago      Exited              kube-scheduler            0                   39af5bb713284
	
	* 
	* ==> coredns [8d9da6f17adb] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [9d9490ac7709] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220601175654-3412
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220601175654-3412
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=functional-20220601175654-3412
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T17_58_39_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 17:58:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220601175654-3412
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 18:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 18:36:39 +0000   Wed, 01 Jun 2022 17:58:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 18:36:39 +0000   Wed, 01 Jun 2022 17:58:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 18:36:39 +0000   Wed, 01 Jun 2022 17:58:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 18:36:39 +0000   Wed, 01 Jun 2022 17:58:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220601175654-3412
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                e0d7477b601740b2a7c32c13851e505c
	  Boot ID:                    3154680d-09d7-4698-9003-0db79e83a883
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-6c9nb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  default                     hello-node-connect-74cf8bc446-kpkgl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  default                     mysql-b87c45988-9rpl2                                     600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     34m
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  kube-system                 coredns-64897985d-jnnzd                                   100m (0%)    0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220601175654-3412                       100m (0%)    0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220601175654-3412             250m (1%)    0 (0%)      0 (0%)           0 (0%)         35m
	  kube-system                 kube-controller-manager-functional-20220601175654-3412    200m (1%)    0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-6vsfj                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220601175654-3412             100m (0%)    0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)   700m (4%)
	  memory             682Mi (1%)   870Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 35m                kube-proxy  
	  Normal  Starting                 38m                kube-proxy  
	  Normal  NodeHasNoDiskPressure    38m (x5 over 38m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x5 over 38m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m (x6 over 38m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet     Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             38m                kubelet     Node functional-20220601175654-3412 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  38m                kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
	  Normal  Starting                 38m                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet     Node functional-20220601175654-3412 status is now: NodeReady
	  Normal  Starting                 35m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  35m (x3 over 35m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35m (x3 over 35m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35m (x3 over 35m)  kubelet     Node functional-20220601175654-3412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jun 1 18:11] WSL2: Performing memory compaction.
	[Jun 1 18:12] WSL2: Performing memory compaction.
	[Jun 1 18:13] WSL2: Performing memory compaction.
	[Jun 1 18:14] WSL2: Performing memory compaction.
	[Jun 1 18:15] WSL2: Performing memory compaction.
	[Jun 1 18:16] WSL2: Performing memory compaction.
	[Jun 1 18:17] WSL2: Performing memory compaction.
	[Jun 1 18:18] WSL2: Performing memory compaction.
	[Jun 1 18:19] WSL2: Performing memory compaction.
	[Jun 1 18:20] WSL2: Performing memory compaction.
	[Jun 1 18:21] WSL2: Performing memory compaction.
	[Jun 1 18:22] WSL2: Performing memory compaction.
	[Jun 1 18:23] WSL2: Performing memory compaction.
	[Jun 1 18:24] WSL2: Performing memory compaction.
	[Jun 1 18:25] WSL2: Performing memory compaction.
	[Jun 1 18:27] WSL2: Performing memory compaction.
	[Jun 1 18:28] WSL2: Performing memory compaction.
	[Jun 1 18:29] WSL2: Performing memory compaction.
	[Jun 1 18:30] WSL2: Performing memory compaction.
	[Jun 1 18:31] WSL2: Performing memory compaction.
	[Jun 1 18:32] WSL2: Performing memory compaction.
	[Jun 1 18:33] WSL2: Performing memory compaction.
	[Jun 1 18:34] WSL2: Performing memory compaction.
	[Jun 1 18:35] WSL2: Performing memory compaction.
	[Jun 1 18:36] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [26e97c628a45] <==
	* {"level":"info","ts":"2022-06-01T18:04:13.935Z","caller":"traceutil/trace.go:171","msg":"trace[254918965] transaction","detail":"{read_only:false; response_revision:806; number_of_response:1; }","duration":"869.3601ms","start":"2022-06-01T18:04:13.065Z","end":"2022-06-01T18:04:13.935Z","steps":["trace[254918965] 'process raft request'  (duration: 769.1438ms)","trace[254918965] 'compare'  (duration: 99.8969ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:04:13.935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:13.065Z","time spent":"869.7186ms","remote":"127.0.0.1:45898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:798 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128013403777340701 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"warn","ts":"2022-06-01T18:04:20.921Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"374.1672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/nginx-svc\" ","response":"range_response_count:1 size:1128"}
	{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"466.248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[1298226982] range","detail":"{range_begin:/registry/services/specs/default/nginx-svc; range_end:; response_count:1; response_revision:812; }","duration":"374.327ms","start":"2022-06-01T18:04:20.547Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[1298226982] 'range keys from in-memory index tree'  (duration: 374.0663ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[30369322] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:812; }","duration":"466.2916ms","start":"2022-06-01T18:04:20.455Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[30369322] 'range keys from in-memory index tree'  (duration: 465.7698ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:20.455Z","time spent":"466.3354ms","remote":"127.0.0.1:45992","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:04:20.547Z","time spent":"374.3843ms","remote":"127.0.0.1:45954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":1152,"request content":"key:\"/registry/services/specs/default/nginx-svc\" "}
	{"level":"warn","ts":"2022-06-01T18:04:20.922Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"250.7619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/default/\" range_end:\"/registry/resourcequotas/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T18:04:20.922Z","caller":"traceutil/trace.go:171","msg":"trace[953612809] range","detail":"{range_begin:/registry/resourcequotas/default/; range_end:/registry/resourcequotas/default0; response_count:0; response_revision:812; }","duration":"250.8126ms","start":"2022-06-01T18:04:20.671Z","end":"2022-06-01T18:04:20.922Z","steps":["trace[953612809] 'range keys from in-memory index tree'  (duration: 250.589ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:04:21.086Z","caller":"traceutil/trace.go:171","msg":"trace[102993881] transaction","detail":"{read_only:false; response_revision:814; number_of_response:1; }","duration":"130.1139ms","start":"2022-06-01T18:04:20.956Z","end":"2022-06-01T18:04:21.086Z","steps":["trace[102993881] 'process raft request'  (duration: 115.8874ms)","trace[102993881] 'compare'  (duration: 14.0267ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:04:21.557Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.0244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T18:04:21.557Z","caller":"traceutil/trace.go:171","msg":"trace[187544958] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:831; }","duration":"104.1873ms","start":"2022-06-01T18:04:21.453Z","end":"2022-06-01T18:04:21.557Z","steps":["trace[187544958] 'agreement among raft nodes before linearized reading'  (duration: 81.6503ms)","trace[187544958] 'range keys from in-memory index tree'  (duration: 22.3404ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T18:11:26.892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":975}
	{"level":"info","ts":"2022-06-01T18:11:26.894Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":975,"took":"1.2789ms"}
	{"level":"info","ts":"2022-06-01T18:16:26.922Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1186}
	{"level":"info","ts":"2022-06-01T18:16:26.923Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1186,"took":"1.0149ms"}
	{"level":"info","ts":"2022-06-01T18:21:26.966Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1395}
	{"level":"info","ts":"2022-06-01T18:21:26.967Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1395,"took":"581.5µs"}
	{"level":"info","ts":"2022-06-01T18:26:26.998Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1605}
	{"level":"info","ts":"2022-06-01T18:26:26.999Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1605,"took":"631.8µs"}
	{"level":"info","ts":"2022-06-01T18:31:27.029Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1816}
	{"level":"info","ts":"2022-06-01T18:31:27.030Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1816,"took":"651.9µs"}
	{"level":"info","ts":"2022-06-01T18:36:27.056Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2024}
	{"level":"info","ts":"2022-06-01T18:36:27.057Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2024,"took":"627.8µs"}
	
	* 
	* ==> etcd [c85d97d43a30] <==
	* {"level":"info","ts":"2022-06-01T17:58:30.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T17:58:30.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T17:58:30.037Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220601175654-3412 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T17:58:30.041Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T17:58:30.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T17:58:30.042Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T17:58:30.043Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T17:58:30.043Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T17:58:30.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-01T17:58:52.436Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.1306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3074"}
	{"level":"info","ts":"2022-06-01T17:58:52.436Z","caller":"traceutil/trace.go:171","msg":"trace[2006106820] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:396; }","duration":"101.3657ms","start":"2022-06-01T17:58:52.335Z","end":"2022-06-01T17:58:52.436Z","steps":["trace[2006106820] 'agreement among raft nodes before linearized reading'  (duration: 100.9958ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T17:58:53.322Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.2369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-zhgqb\" ","response":"range_response_count:1 size:3472"}
	{"level":"info","ts":"2022-06-01T17:58:53.322Z","caller":"traceutil/trace.go:171","msg":"trace[870259502] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-zhgqb; range_end:; response_count:1; response_revision:434; }","duration":"100.4721ms","start":"2022-06-01T17:58:53.221Z","end":"2022-06-01T17:58:53.322Z","steps":["trace[870259502] 'range keys from in-memory index tree'  (duration: 100.1141ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:01:08.430Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T18:01:08.430Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220601175654-3412","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/01 18:01:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 18:01:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T18:01:08.527Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-01T18:01:08.633Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:01:08.635Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:01:08.635Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220601175654-3412","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:37:09 up 58 min,  0 users,  load average: 0.27, 0.27, 0.37
	Linux functional-20220601175654-3412 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3cb91cda9605] <==
	* I0601 18:01:23.667946       1 server.go:565] external host was not specified, using 192.168.49.2
	I0601 18:01:23.669104       1 server.go:172] Version: v1.23.6
	E0601 18:01:23.669749       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-apiserver [ce413f7f994b] <==
	* I0601 18:02:41.250133       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 18:02:41.548927       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 18:02:57.328181       1 trace.go:205] Trace[1506951188]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jun-2022 18:02:56.328) (total time: 999ms):
	Trace[1506951188]: [999.2698ms] [999.2698ms] END
	I0601 18:02:57.329058       1 trace.go:205] Trace[1480288553]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:f3bd5c24-4c59-4397-90f6-e54ae86deb0b,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:02:56.328) (total time: 1000ms):
	Trace[1480288553]: ---"Listing from storage done" 999ms (18:02:57.328)
	Trace[1480288553]: [1.0002692s] [1.0002692s] END
	{"level":"warn","ts":"2022-06-01T18:03:20.442Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ca1c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2022-06-01T18:03:20.451Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ca1c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0601 18:03:20.552152       1 trace.go:205] Trace[1465098381]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:d3639560-931e-4eb1-b10b-67e20a9b8cbd,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:03:18.758) (total time: 1793ms):
	Trace[1465098381]: ---"About to write a response" 1793ms (18:03:20.551)
	Trace[1465098381]: [1.7933127s] [1.7933127s] END
	I0601 18:03:20.552419       1 trace.go:205] Trace[1519418264]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (01-Jun-2022 18:03:18.333) (total time: 2219ms):
	Trace[1519418264]: [2.2191318s] [2.2191318s] END
	I0601 18:03:20.553198       1 trace.go:205] Trace[237103339]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:3b310fa1-aa4d-4309-a7ec-5d1ae8cbb15f,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 18:03:18.333) (total time: 2219ms):
	Trace[237103339]: ---"Listing from storage done" 2219ms (18:03:20.552)
	Trace[237103339]: [2.2199462s] [2.2199462s] END
	I0601 18:03:48.854316       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.108.73.94]
	I0601 18:04:13.936263       1 trace.go:205] Trace[630384542]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (01-Jun-2022 18:04:13.061) (total time: 874ms):
	Trace[630384542]: ---"Transaction committed" 870ms (18:04:13.936)
	Trace[630384542]: [874.9458ms] [874.9458ms] END
	I0601 18:04:21.409539       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.98.95.14]
	I0601 18:04:40.093795       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.100.28.255]
	W0601 18:17:49.374786       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0601 18:35:57.787024       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
	* 
	* ==> kube-controller-manager [2efa890199e8] <==
	* I0601 18:01:44.328872       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 18:01:44.329416       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 18:01:44.329500       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0601 18:01:44.329420       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 18:01:44.343797       1 shared_informer.go:247] Caches are synced for HPA 
	I0601 18:01:44.344678       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 18:01:44.344684       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 18:01:44.429940       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0601 18:01:44.430250       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 18:01:44.444052       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 18:01:44.527768       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 18:01:44.527908       1 disruption.go:371] Sending events to api server.
	I0601 18:01:44.528066       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 18:01:44.528312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 18:01:44.830145       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 18:01:44.866567       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 18:01:44.866668       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 18:02:41.260761       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0601 18:02:41.531793       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-9rpl2"
	I0601 18:04:01.891080       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0601 18:04:01.891230       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0601 18:04:21.089469       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0601 18:04:21.123807       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-kpkgl"
	I0601 18:04:39.781810       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0601 18:04:39.785742       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-6c9nb"
	
	* 
	* ==> kube-controller-manager [34197b3df9eb] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:157 +0x9e
	k8s.io/kubernetes/pkg/controller/serviceaccount.(*TokensController).syncSecret(0xc000e24a20)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/serviceaccount/tokens_controller.go:268 +0x53
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000ba9f00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2a236e6070e5dd3a, {0x4d500a0, 0xc000fe8690}, 0x1, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xd7ddf9310e7bd0be, 0x0, 0x0, 0xde, 0xec8a4e1ac4b49010)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x697e1aa9b60ab056, 0x814df45ccd3b02de, 0x2589b81591c6c8b9)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
	created by k8s.io/kubernetes/pkg/controller/serviceaccount.(*TokensController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/serviceaccount/tokens_controller.go:180 +0x245
	
	goroutine 355 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:301 +0x77
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:300 +0xc8
	
	goroutine 356 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:708 +0x1c9
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:691 +0xcf
	
	* 
	* ==> kube-proxy [54088515f3b1] <==
	* E0601 17:58:56.221181       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0601 17:58:56.228048       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0601 17:58:56.231778       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0601 17:58:56.234793       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0601 17:58:56.237880       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0601 17:58:56.241003       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0601 17:58:56.525646       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 17:58:56.525776       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 17:58:56.525840       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 17:58:56.825135       1 server_others.go:206] "Using iptables Proxier"
	I0601 17:58:56.825269       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 17:58:56.825286       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 17:58:56.825334       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 17:58:56.826191       1 server.go:656] "Version info" version="v1.23.6"
	I0601 17:58:56.827267       1 config.go:317] "Starting service config controller"
	I0601 17:58:56.827424       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 17:58:56.827779       1 config.go:226] "Starting endpoint slice config controller"
	I0601 17:58:56.827939       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 17:58:56.928822       1 shared_informer.go:247] Caches are synced for service config 
	I0601 17:58:56.929061       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [9ee2f9d1ae9d] <==
	* E0601 18:01:13.043339       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0601 18:01:13.046778       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0601 18:01:13.049310       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0601 18:01:13.126889       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0601 18:01:13.133157       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0601 18:01:13.137532       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0601 18:01:13.141559       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220601175654-3412": dial tcp 192.168.49.2:8441: connect: connection refused
	E0601 18:01:14.311819       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220601175654-3412": dial tcp 192.168.49.2:8441: connect: connection refused
	I0601 18:01:21.335488       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 18:01:21.335602       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 18:01:21.335634       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 18:01:21.735872       1 server_others.go:206] "Using iptables Proxier"
	I0601 18:01:21.736032       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 18:01:21.736046       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 18:01:21.736076       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 18:01:21.737101       1 server.go:656] "Version info" version="v1.23.6"
	I0601 18:01:21.741418       1 config.go:226] "Starting endpoint slice config controller"
	I0601 18:01:21.741861       1 config.go:317] "Starting service config controller"
	I0601 18:01:21.742573       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 18:01:21.742482       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 18:01:21.844649       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 18:01:21.844889       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d64c64b4d63] <==
	* E0601 17:58:36.087333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 17:58:36.092752       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 17:58:36.092949       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 17:58:36.117662       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 17:58:36.117770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 17:58:36.121357       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 17:58:36.121463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 17:58:36.138367       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 17:58:36.138484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 17:58:36.317903       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 17:58:36.318014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 17:58:36.418580       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 17:58:36.418720       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 17:58:36.528358       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 17:58:36.528508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 17:58:36.558910       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 17:58:36.559020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 17:58:36.617994       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 17:58:36.618094       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 17:58:38.104563       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 17:58:38.104715       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 17:58:39.034025       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 18:01:08.534747       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 18:01:08.535086       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0601 18:01:08.535167       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [f8476e9b4b72] <==
	* I0601 18:01:14.928715       1 serving.go:348] Generated self-signed cert in-memory
	W0601 18:01:21.326767       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 18:01:21.326805       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0601 18:01:21.326825       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 18:01:21.326838       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 18:01:21.526318       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 18:01:21.529110       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 18:01:21.529720       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 18:01:21.529741       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 18:01:21.529777       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 18:01:21.629987       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0601 18:01:31.233478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0601 18:01:31.233680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0601 18:01:31.233744       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0601 18:01:31.233814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0601 18:01:31.233883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0601 18:01:31.328063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 17:57:49 UTC, end at Wed 2022-06-01 18:37:10 UTC. --
	Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.158468    6101 reconciler.go:300] "Volume detached for volume \"kube-api-access-5c74l\" (UniqueName: \"kubernetes.io/projected/3a5da3c5-8277-4bc6-b783-7051fd58f871-kube-api-access-5c74l\") on node \"functional-20220601175654-3412\" DevicePath \"\""
	Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.158597    6101 reconciler.go:300] "Volume detached for volume \"pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\" (UniqueName: \"kubernetes.io/host-path/3a5da3c5-8277-4bc6-b783-7051fd58f871-pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\") on node \"functional-20220601175654-3412\" DevicePath \"\""
	Jun 01 18:04:32 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:32.959523    6101 scope.go:110] "RemoveContainer" containerID="428f2721d941ee9d29bee164b4a1e72f74826bea609f3fd8f3b28943beaba0f5"
	Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.607897    6101 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.837277    6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9kw\" (UniqueName: \"kubernetes.io/projected/19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd-kube-api-access-hw9kw\") pod \"sp-pod\" (UID: \"19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd\") " pod="default/sp-pod"
	Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.837482    6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\" (UniqueName: \"kubernetes.io/host-path/19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd-pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9\") pod \"sp-pod\" (UID: \"19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd\") " pod="default/sp-pod"
	Jun 01 18:04:33 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:33.974227    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-kpkgl through plugin: invalid network status for"
	Jun 01 18:04:34 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:34.535202    6101 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3a5da3c5-8277-4bc6-b783-7051fd58f871 path="/var/lib/kubelet/pods/3a5da3c5-8277-4bc6-b783-7051fd58f871/volumes"
	Jun 01 18:04:34 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:34.965148    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 01 18:04:35 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:35.009102    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 01 18:04:36 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:36.026458    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 01 18:04:37 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:37.232507    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 01 18:04:39 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:39.797741    6101 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 18:04:39 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:39.891576    6101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcx9\" (UniqueName: \"kubernetes.io/projected/6dff7187-bcdf-4179-b4f5-61f1663b106c-kube-api-access-2vcx9\") pod \"hello-node-54fbb85-6c9nb\" (UID: \"6dff7187-bcdf-4179-b4f5-61f1663b106c\") " pod="default/hello-node-54fbb85-6c9nb"
	Jun 01 18:04:40 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:40.936762    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-6c9nb through plugin: invalid network status for"
	Jun 01 18:04:40 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:40.936948    6101 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2af19e9c1aff9c35f54f45dbee58eb8b57601e3f4e136cca4a9d4f5b1d525992"
	Jun 01 18:04:41 functional-20220601175654-3412 kubelet[6101]: I0601 18:04:41.953808    6101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-6c9nb through plugin: invalid network status for"
	Jun 01 18:05:21 functional-20220601175654-3412 kubelet[6101]: E0601 18:05:21.026121    6101 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/5dd22e46ce26d76ce53ed70eb816f68071a75adce2dd558df7d59cf62c541102/diff" to get inode usage: stat /var/lib/docker/overlay2/5dd22e46ce26d76ce53ed70eb816f68071a75adce2dd558df7d59cf62c541102/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c" to get inode usage: stat /var/lib/docker/containers/e2ab857ff931d067ffedd71ba632df14981b57ce80d29351a49991e38c08c79c: no such file or directory
	Jun 01 18:06:20 functional-20220601175654-3412 kubelet[6101]: W0601 18:06:20.990761    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:11:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:11:21.004791    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:16:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:16:21.021117    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:21:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:21:21.036896    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:26:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:26:21.053184    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:31:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:31:21.069505    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 01 18:36:21 functional-20220601175654-3412 kubelet[6101]: W0601 18:36:21.083555    6101 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [249f3b6cebd0] <==
	* I0601 18:01:25.030891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0601 18:01:25.041225       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [e059ac677a6c] <==
	* I0601 18:01:42.849361       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 18:01:42.952392       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 18:01:42.952590       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 18:02:00.541982       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 18:02:00.542191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31fc7c77-817a-49e9-98d6-e90848c88c5b", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd became leader
	I0601 18:02:00.542369       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd!
	I0601 18:02:00.643401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220601175654-3412_99bde5bd-9ca0-41b5-8e78-2ef4bc83d1fd!
	I0601 18:04:01.891883       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0601 18:04:01.892161       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    45b79934-94bb-464c-ae51-897b26c8a5cb 463 0 2022-06-01 17:58:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-01 17:58:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2cd87724-3bba-4cab-b1a2-a68496ffc9e9 785 0 2022-06-01 18:04:01 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-06-01 18:04:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-06-01 18:04:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{}
,Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0601 18:04:01.893035       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2cd87724-3bba-4cab-b1a2-a68496ffc9e9", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0601 18:04:01.893505       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9" provisioned
	I0601 18:04:01.893653       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0601 18:04:01.893667       1 volume_store.go:212] Trying to save persistentvolume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9"
	I0601 18:04:01.940106       1 volume_store.go:219] persistentvolume "pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9" saved
	I0601 18:04:01.940597       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2cd87724-3bba-4cab-b1a2-a68496ffc9e9", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2cd87724-3bba-4cab-b1a2-a68496ffc9e9
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220601175654-3412 -n functional-20220601175654-3412
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220601175654-3412 -n functional-20220601175654-3412: (6.3847037s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220601175654-3412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220601175654-3412 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 describe pod : exit status 1 (228.9343ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context functional-20220601175654-3412 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1958.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.87s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0601 18:05:11.833527    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:180: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:181: (dbg) Run:  kubectl --context functional-20220601175654-3412 get svc nginx-svc
functional_test_tunnel_test.go:185: failed to kubectl get svc nginx-svc:

-- stdout --
	NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.108.73.94   <pending>     80:30752/TCP   3m17s

-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.87s)

TestNoKubernetes/serial/Stop (10.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-20220601193434-3412
no_kubernetes_test.go:158: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p NoKubernetes-20220601193434-3412: exit status 1 (2.6425856s)
no_kubernetes_test.go:160: Failed to stop minikube "out/minikube-windows-amd64.exe stop -p NoKubernetes-20220601193434-3412" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220601193434-3412
helpers_test.go:231: (dbg) Done: docker inspect NoKubernetes-20220601193434-3412: (1.1403958s)
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20220601193434-3412:

-- stdout --
	[
	    {
	        "Id": "ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7",
	        "Created": "2022-06-01T19:38:31.7204861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 140496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:38:36.3548127Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7/hosts",
	        "LogPath": "/var/lib/docker/containers/ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7/ba0ee0c9e42b6dc3a631038fc2c1549fec5ba98ffe0dc322ff7c05d3074574e7-json.log",
	        "Name": "/NoKubernetes-20220601193434-3412",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-20220601193434-3412:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-20220601193434-3412",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 17091788800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 17091788800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae7fabcc9fbfbe05d23197b8b7a468a1b7c1a684cd1cca5c756a49cdeb41b478-init/diff:/var/lib/docker/overlay2/487b259deb346e6ca1e96023cfc1832638489725b45384e10e2c2effe462993c/diff:/var/lib/docker/overlay2/7830a7ee158a10893945c1b577efeb821d499cce7646d95d3c0cffb3ed372dca/diff:/var/lib/docker/overlay2/6fe83b204fd4124b69c52dc2b8620b75ac92764b58a8d1af6662ff240e517719/diff:/var/lib/docker/overlay2/6362560b46c9fab8d6514c8429f6275481f64020b6a76226333ec63d40b3509c/diff:/var/lib/docker/overlay2/b947dedac2c38cb9982c9b363e89606d658250ef2798320fdf3517f747048abd/diff:/var/lib/docker/overlay2/bc2839e6d5fd56592e9530bb7f1f81ed9502bdb7539e7f429732e9cf4cd3b17d/diff:/var/lib/docker/overlay2/1b3239e13a55e9fa626a7541842d884445974471039cc2d9226ad10f2b953536/diff:/var/lib/docker/overlay2/1884c2d81ecac540a3174fb86cefef2fd199eaa5c78d29afe6c63aff263f9584/diff:/var/lib/docker/overlay2/d1c361312180db411937b7786e1329e12f9ed7b9439d4574d6d9a237a8ef8a9e/diff:/var/lib/docker/overlay2/15125b
9e77872950f8bc77e7ec27026feb64d93311200f76586c570bbceb3810/diff:/var/lib/docker/overlay2/1778c10167346a2b58dd494e4689512b56050eed4b6df53a451f9aa373c3af35/diff:/var/lib/docker/overlay2/e45fa45d984d0fdd2eaca3b15c5e81abaa51b6b84fc051f20678d16cb6548a34/diff:/var/lib/docker/overlay2/54cea2bf354fab8e2c392a574195b06b919122ff6a1fb01b05f554ba43d9719e/diff:/var/lib/docker/overlay2/8667e3403c29f1a18aaababc226712f548d7dd623a4b9ac413520cf72955fb40/diff:/var/lib/docker/overlay2/5d20284be4fd7015d5b8eb6ae55b108a262e3c66cdaa9a8c4c23a6eb1726d4da/diff:/var/lib/docker/overlay2/d623242b443d7de7f75761cda756115d0f9df9f3b73144554928ceac06876d5b/diff:/var/lib/docker/overlay2/143dd7f527aa222e0eeaafe5e0182140c95e402aa335e7994b2aa7f1e6b6ba3c/diff:/var/lib/docker/overlay2/d690aea98cc6cb39fdd3f6660997b792085628157b14d576701adc72d3e6cf55/diff:/var/lib/docker/overlay2/2bb1d07709342e3bcb4feda7dc7d17fa9707986bf88cd7dc52eab255748276e0/diff:/var/lib/docker/overlay2/ea79e7f8097cf29c435b8a18ee6332b067ec4f7858b6eaabf897d2076a8deb3e/diff:/var/lib/d
ocker/overlay2/dab209c0bb58d228f914118438b0a79649c46857e6fcb416c0c556c049154f5d/diff:/var/lib/docker/overlay2/3bd421aaea3202bb8715cdd0f452aa411f20f2025b05d6a03811ebc7d0347896/diff:/var/lib/docker/overlay2/7dc112f5a6dc7809e579b2eaaeef54d3d5ee1326e9f35817dad641bc4e2c095a/diff:/var/lib/docker/overlay2/772b23d424621d351ce90f47e351441dc7fb224576441813bb86be52c0552022/diff:/var/lib/docker/overlay2/86ea33f163c6d58acb53a8e5bb27e1c131a6c915d7459ca03c90383b299fde58/diff:/var/lib/docker/overlay2/58deaba6fb571643d48dd090dd850eeb8fd343f41591580f4509fe61280e87de/diff:/var/lib/docker/overlay2/d8e5be8b94fe5858e777434bd7d360128719def82a5e7946fd4cb69aecab39fe/diff:/var/lib/docker/overlay2/a319e02b15899f20f933362a00fa40be829441edea2a0be36cc1e30b3417cf57/diff:/var/lib/docker/overlay2/b315efdf7f2b5f50f74664829533097f21ab8bda14478b76e9b5781079830b20/diff:/var/lib/docker/overlay2/bb96faec132eb5919c94fc772f61e63514308af6f72ec141483a94a85a77cc3b/diff:/var/lib/docker/overlay2/55dbff36528117ad96b3be9ee2396f7faee2f0b493773aa5abf5ba2b23a
5f728/diff:/var/lib/docker/overlay2/f11da52264a1f34c3b2180d2adcbcb7cc077c7f91611974bf0946d6bea248de5/diff:/var/lib/docker/overlay2/6ca19b0a8327fcd8f60b06c6b0f4519ff5f0f3eacd034e6c5c16ed45239f2238/diff:/var/lib/docker/overlay2/f86ed588a9cb5b359a174312bf8595e8e896ba3d8922b0bae1d8839518d24fb6/diff:/var/lib/docker/overlay2/0bf0e1906e62c903f71626646e2339b8e2c809d40948898d803dcaf0218ed0dd/diff:/var/lib/docker/overlay2/c8ff277ec5a9fa0db24ad64c7e0523b2b5a5c7d64f2148a0c9823fdd5bc60cad/diff:/var/lib/docker/overlay2/4cfbf9fc2a4a968773220ae74312f07a616afc80cbf9a4b68e2c2357c09ca009/diff:/var/lib/docker/overlay2/9a235e4b15bee3f10260f9356535723bf351a49b1f19af094d59a1439b7a9632/diff:/var/lib/docker/overlay2/9699d245a454ce1e21f1ac875a0910a63fb34d3d2870f163d8b6d258f33c2f4f/diff:/var/lib/docker/overlay2/6e093a9dfe282a2a53a4081251541e0c5b4176bb42d9c9bf908f19b1fdc577f5/diff:/var/lib/docker/overlay2/98036438a55a1794d298c11dc1eb0633e06ed433b84d24a3972e634a0b11deb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae7fabcc9fbfbe05d23197b8b7a468a1b7c1a684cd1cca5c756a49cdeb41b478/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae7fabcc9fbfbe05d23197b8b7a468a1b7c1a684cd1cca5c756a49cdeb41b478/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae7fabcc9fbfbe05d23197b8b7a468a1b7c1a684cd1cca5c756a49cdeb41b478/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-20220601193434-3412",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-20220601193434-3412/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-20220601193434-3412",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-20220601193434-3412",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-20220601193434-3412",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed0ecffed6ecb3b82e1b29eb7e44207454e29f93a4c00c1f1bf919bf9e063ee8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60475"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60476"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60474"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ed0ecffed6ec",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-20220601193434-3412": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ba0ee0c9e42b",
	                        "NoKubernetes-20220601193434-3412"
	                    ],
	                    "NetworkID": "e155ca3e933846d8b47ae2b8ffbc9628e1ad107b070629897d84af3442c3b516",
	                    "EndpointID": "27d85e60c607b91cbdd2a1d4735192ab29200fd48c7846ae392e6253558fa5d4",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601193434-3412 -n NoKubernetes-20220601193434-3412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601193434-3412 -n NoKubernetes-20220601193434-3412: exit status 6 (6.5119782s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 19:39:42.506493    8736 status.go:413] kubeconfig endpoint: extract IP: "NoKubernetes-20220601193434-3412" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-20220601193434-3412" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/Stop (10.30s)

TestNetworkPlugins/group/cilium/Start (621.23s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220601193451-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220601193451-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (10m20.5832309s)

-- stdout --
	* [cilium-20220601193451-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220601193451-3412 in cluster cilium-20220601193451-3412
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 19:50:53.441646    6740 out.go:296] Setting OutFile to fd 1888 ...
	I0601 19:50:53.499538    6740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:50:53.499538    6740 out.go:309] Setting ErrFile to fd 1892...
	I0601 19:50:53.499538    6740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:50:53.513103    6740 out.go:303] Setting JSON to false
	I0601 19:50:53.516187    6740 start.go:115] hostinfo: {"hostname":"minikube4","uptime":73168,"bootTime":1654039885,"procs":169,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 19:50:53.516383    6740 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 19:50:53.528227    6740 out.go:177] * [cilium-20220601193451-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 19:50:53.531925    6740 notify.go:193] Checking for updates...
	I0601 19:50:53.534445    6740 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:50:53.539482    6740 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 19:50:53.544071    6740 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 19:50:53.546209    6740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 19:50:53.551750    6740 config.go:178] Loaded profile config "auto-20220601193434-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:50:53.552490    6740 config.go:178] Loaded profile config "pause-20220601194928-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:50:53.553174    6740 config.go:178] Loaded profile config "running-upgrade-20220601194733-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 19:50:53.553174    6740 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 19:50:56.366202    6740 docker.go:137] docker version: linux-20.10.14
	I0601 19:50:56.374219    6740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:50:58.552536    6740 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1772138s)
	I0601 19:50:58.552536    6740 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-01 19:50:57.4423427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:50:58.556538    6740 out.go:177] * Using the docker driver based on user configuration
	I0601 19:50:58.559544    6740 start.go:284] selected driver: docker
	I0601 19:50:58.559544    6740 start.go:806] validating driver "docker" against <nil>
	I0601 19:50:58.559544    6740 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 19:50:58.700795    6740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:51:01.208732    6740 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5076493s)
	I0601 19:51:01.209074    6740 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-01 19:50:59.9155993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:51:01.209074    6740 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 19:51:01.209969    6740 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 19:51:01.213318    6740 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 19:51:01.215101    6740 cni.go:95] Creating CNI manager for "cilium"
	I0601 19:51:01.215101    6740 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0601 19:51:01.215101    6740 start_flags.go:306] config:
	{Name:cilium-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:51:01.218094    6740 out.go:177] * Starting control plane node cilium-20220601193451-3412 in cluster cilium-20220601193451-3412
	I0601 19:51:01.221121    6740 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 19:51:01.223102    6740 out.go:177] * Pulling base image ...
	I0601 19:51:01.226095    6740 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:51:01.226095    6740 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 19:51:01.226095    6740 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 19:51:01.226095    6740 cache.go:57] Caching tarball of preloaded images
	I0601 19:51:01.227102    6740 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 19:51:01.227102    6740 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 19:51:01.227102    6740 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\config.json ...
	I0601 19:51:01.227102    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\config.json: {Name:mkc17bba3203c1d09aa3cbd37f0c9449ef7ac225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:51:02.405454    6740 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 19:51:02.405454    6740 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 19:51:02.405454    6740 cache.go:206] Successfully downloaded all kic artifacts
	I0601 19:51:02.405454    6740 start.go:352] acquiring machines lock for cilium-20220601193451-3412: {Name:mk1fca0f605a3c528c08a59a2073b521e263a4e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 19:51:02.405454    6740 start.go:356] acquired machines lock for "cilium-20220601193451-3412" in 0s
	I0601 19:51:02.406148    6740 start.go:91] Provisioning new machine with config: &{Name:cilium-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:51:02.406221    6740 start.go:131] createHost starting for "" (driver="docker")
	I0601 19:51:02.415212    6740 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 19:51:02.415212    6740 start.go:165] libmachine.API.Create for "cilium-20220601193451-3412" (driver="docker")
	I0601 19:51:02.415747    6740 client.go:168] LocalClient.Create starting
	I0601 19:51:02.416791    6740 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0601 19:51:02.416863    6740 main.go:134] libmachine: Decoding PEM data...
	I0601 19:51:02.416863    6740 main.go:134] libmachine: Parsing certificate...
	I0601 19:51:02.416863    6740 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0601 19:51:02.417488    6740 main.go:134] libmachine: Decoding PEM data...
	I0601 19:51:02.417604    6740 main.go:134] libmachine: Parsing certificate...
	I0601 19:51:02.425841    6740 cli_runner.go:164] Run: docker network inspect cilium-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 19:51:03.613004    6740 cli_runner.go:211] docker network inspect cilium-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 19:51:03.613004    6740 cli_runner.go:217] Completed: docker network inspect cilium-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1871017s)
	I0601 19:51:03.621037    6740 network_create.go:272] running [docker network inspect cilium-20220601193451-3412] to gather additional debugging logs...
	I0601 19:51:03.621037    6740 cli_runner.go:164] Run: docker network inspect cilium-20220601193451-3412
	W0601 19:51:04.933124    6740 cli_runner.go:211] docker network inspect cilium-20220601193451-3412 returned with exit code 1
	I0601 19:51:04.933124    6740 cli_runner.go:217] Completed: docker network inspect cilium-20220601193451-3412: (1.3120192s)
	I0601 19:51:04.933124    6740 network_create.go:275] error running [docker network inspect cilium-20220601193451-3412]: docker network inspect cilium-20220601193451-3412: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220601193451-3412
	I0601 19:51:04.933124    6740 network_create.go:277] output of [docker network inspect cilium-20220601193451-3412]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220601193451-3412
	
	** /stderr **
	I0601 19:51:04.940127    6740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 19:51:06.141322    6740 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2011331s)
	I0601 19:51:06.167181    6740 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060b8b8] misses:0}
	I0601 19:51:06.167181    6740 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:51:06.167181    6740 network_create.go:115] attempt to create docker network cilium-20220601193451-3412 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 19:51:06.179814    6740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412
	W0601 19:51:07.445307    6740 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412 returned with exit code 1
	I0601 19:51:07.445307    6740 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412: (1.2654276s)
	W0601 19:51:07.445307    6740 network_create.go:107] failed to create docker network cilium-20220601193451-3412 192.168.49.0/24, will retry: subnet is taken
	I0601 19:51:07.463374    6740 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060b8b8] amended:false}} dirty:map[] misses:0}
	I0601 19:51:07.464351    6740 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:51:07.488327    6740 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060b8b8] amended:true}} dirty:map[192.168.49.0:0xc00060b8b8 192.168.58.0:0xc00058c390] misses:0}
	I0601 19:51:07.488327    6740 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:51:07.488327    6740 network_create.go:115] attempt to create docker network cilium-20220601193451-3412 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 19:51:07.505328    6740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412
	W0601 19:51:08.723260    6740 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412 returned with exit code 1
	I0601 19:51:08.723260    6740 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412: (1.2178681s)
	W0601 19:51:08.723260    6740 network_create.go:107] failed to create docker network cilium-20220601193451-3412 192.168.58.0/24, will retry: subnet is taken
	I0601 19:51:08.746243    6740 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060b8b8] amended:true}} dirty:map[192.168.49.0:0xc00060b8b8 192.168.58.0:0xc00058c390] misses:1}
	I0601 19:51:08.746243    6740 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:51:08.769167    6740 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060b8b8] amended:true}} dirty:map[192.168.49.0:0xc00060b8b8 192.168.58.0:0xc00058c390 192.168.67.0:0xc00060a248] misses:1}
	I0601 19:51:08.769167    6740 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:51:08.769167    6740 network_create.go:115] attempt to create docker network cilium-20220601193451-3412 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0601 19:51:08.785179    6740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412
	I0601 19:51:10.137557    6740 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601193451-3412: (1.3523082s)
	I0601 19:51:10.137557    6740 network_create.go:99] docker network cilium-20220601193451-3412 192.168.67.0/24 created
	I0601 19:51:10.137557    6740 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-20220601193451-3412" container
	I0601 19:51:10.150558    6740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 19:51:11.424267    6740 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2736422s)
	I0601 19:51:11.431142    6740 cli_runner.go:164] Run: docker volume create cilium-20220601193451-3412 --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true
	I0601 19:51:12.615312    6740 cli_runner.go:217] Completed: docker volume create cilium-20220601193451-3412 --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true: (1.1841079s)
	I0601 19:51:12.615312    6740 oci.go:103] Successfully created a docker volume cilium-20220601193451-3412
	I0601 19:51:12.622322    6740 cli_runner.go:164] Run: docker run --rm --name cilium-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --entrypoint /usr/bin/test -v cilium-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 19:51:15.479885    6740 cli_runner.go:217] Completed: docker run --rm --name cilium-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --entrypoint /usr/bin/test -v cilium-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib: (2.8564192s)
	I0601 19:51:15.479885    6740 oci.go:107] Successfully prepared a docker volume cilium-20220601193451-3412
	I0601 19:51:15.479885    6740 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:51:15.479885    6740 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 19:51:15.486862    6740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 19:51:37.596445    6740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (22.1084335s)
	I0601 19:51:37.596445    6740 kic.go:188] duration metric: took 22.115411 seconds to extract preloaded images to volume
	I0601 19:51:37.603452    6740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:51:39.865749    6740 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2621799s)
	I0601 19:51:39.865749    6740 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-01 19:51:38.724201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:51:39.875627    6740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 19:51:42.243641    6740 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3678907s)
	I0601 19:51:42.249642    6740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220601193451-3412 --name cilium-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220601193451-3412 --network cilium-20220601193451-3412 --ip 192.168.67.2 --volume cilium-20220601193451-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 19:51:44.895385    6740 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220601193451-3412 --name cilium-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220601193451-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220601193451-3412 --network cilium-20220601193451-3412 --ip 192.168.67.2 --volume cilium-20220601193451-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: (2.6456055s)
	I0601 19:51:44.907348    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Running}}
	I0601 19:51:46.198164    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Running}}: (1.2906154s)
	I0601 19:51:46.209811    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:51:47.605411    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.3946523s)
	I0601 19:51:47.624715    6740 cli_runner.go:164] Run: docker exec cilium-20220601193451-3412 stat /var/lib/dpkg/alternatives/iptables
	I0601 19:51:49.443981    6740 cli_runner.go:217] Completed: docker exec cilium-20220601193451-3412 stat /var/lib/dpkg/alternatives/iptables: (1.8191718s)
	I0601 19:51:49.443981    6740 oci.go:247] the created container "cilium-20220601193451-3412" has a running status.
	I0601 19:51:49.443981    6740 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa...
	I0601 19:51:49.847425    6740 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 19:51:51.409178    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:51:52.745636    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.3363889s)
	I0601 19:51:52.760646    6740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 19:51:52.761637    6740 kic_runner.go:114] Args: [docker exec --privileged cilium-20220601193451-3412 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 19:51:54.256476    6740 kic_runner.go:123] Done: [docker exec --privileged cilium-20220601193451-3412 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4947619s)
	I0601 19:51:54.260492    6740 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa...
	I0601 19:51:54.944248    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:51:56.280478    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.3361094s)
	I0601 19:51:56.280582    6740 machine.go:88] provisioning docker machine ...
	I0601 19:51:56.280696    6740 ubuntu.go:169] provisioning hostname "cilium-20220601193451-3412"
	I0601 19:51:56.294994    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:51:57.631586    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.3362061s)
	I0601 19:51:57.635249    6740 main.go:134] libmachine: Using SSH client type: native
	I0601 19:51:57.636250    6740 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61146 <nil> <nil>}
	I0601 19:51:57.636326    6740 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220601193451-3412 && echo "cilium-20220601193451-3412" | sudo tee /etc/hostname
	I0601 19:51:57.899861    6740 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220601193451-3412
	
	I0601 19:51:57.907867    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:51:59.256532    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.3485465s)
	I0601 19:51:59.260798    6740 main.go:134] libmachine: Using SSH client type: native
	I0601 19:51:59.261411    6740 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61146 <nil> <nil>}
	I0601 19:51:59.261411    6740 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220601193451-3412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220601193451-3412/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220601193451-3412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 19:51:59.437777    6740 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 19:51:59.437777    6740 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0601 19:51:59.437777    6740 ubuntu.go:177] setting up certificates
	I0601 19:51:59.437777    6740 provision.go:83] configureAuth start
	I0601 19:51:59.450069    6740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412
	I0601 19:52:00.767913    6740 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412: (1.3175613s)
	I0601 19:52:00.767913    6740 provision.go:138] copyHostCerts
	I0601 19:52:00.767913    6740 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0601 19:52:00.767913    6740 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0601 19:52:00.769103    6740 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0601 19:52:00.769913    6740 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0601 19:52:00.769913    6740 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0601 19:52:00.769913    6740 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0601 19:52:00.771905    6740 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0601 19:52:00.771905    6740 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0601 19:52:00.772921    6740 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0601 19:52:00.773919    6740 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220601193451-3412 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220601193451-3412]
	I0601 19:52:00.884934    6740 provision.go:172] copyRemoteCerts
	I0601 19:52:00.898923    6740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 19:52:00.906918    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:02.118534    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2115535s)
	I0601 19:52:02.118534    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:52:02.232942    6740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3339499s)
	I0601 19:52:02.233810    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 19:52:02.281523    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 19:52:02.343962    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0601 19:52:02.401118    6740 provision.go:86] duration metric: configureAuth took 2.9631877s
	I0601 19:52:02.401118    6740 ubuntu.go:193] setting minikube options for container-runtime
	I0601 19:52:02.401668    6740 config.go:178] Loaded profile config "cilium-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:52:02.412612    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:03.638523    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2258471s)
	I0601 19:52:03.641519    6740 main.go:134] libmachine: Using SSH client type: native
	I0601 19:52:03.642494    6740 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61146 <nil> <nil>}
	I0601 19:52:03.642494    6740 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 19:52:03.826543    6740 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 19:52:03.826543    6740 ubuntu.go:71] root file system type: overlay
	I0601 19:52:03.826543    6740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 19:52:03.838429    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:05.066382    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2278658s)
	I0601 19:52:05.072801    6740 main.go:134] libmachine: Using SSH client type: native
	I0601 19:52:05.073260    6740 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61146 <nil> <nil>}
	I0601 19:52:05.073260    6740 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 19:52:05.340668    6740 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 19:52:05.350887    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:06.616062    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2645423s)
	I0601 19:52:06.622384    6740 main.go:134] libmachine: Using SSH client type: native
	I0601 19:52:06.623538    6740 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61146 <nil> <nil>}
	I0601 19:52:06.623612    6740 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 19:52:08.141440    6740 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 19:52:05.317705000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 19:52:08.141440    6740 machine.go:91] provisioned docker machine in 11.8602413s
	I0601 19:52:08.141440    6740 client.go:171] LocalClient.Create took 1m5.7221947s
	I0601 19:52:08.141976    6740 start.go:173] duration metric: libmachine.API.Create for "cilium-20220601193451-3412" took 1m5.7228121s
	I0601 19:52:08.141976    6740 start.go:306] post-start starting for "cilium-20220601193451-3412" (driver="docker")
	I0601 19:52:08.141976    6740 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 19:52:08.160218    6740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 19:52:08.175118    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:09.359886    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.1846654s)
	I0601 19:52:09.359886    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:52:09.516153    6740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3558637s)
	I0601 19:52:09.526163    6740 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 19:52:09.536163    6740 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 19:52:09.536163    6740 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 19:52:09.536163    6740 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 19:52:09.536163    6740 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 19:52:09.536163    6740 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0601 19:52:09.536163    6740 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0601 19:52:09.537161    6740 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem -> 34122.pem in /etc/ssl/certs
	I0601 19:52:09.548163    6740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 19:52:09.576810    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /etc/ssl/certs/34122.pem (1708 bytes)
	I0601 19:52:09.642661    6740 start.go:309] post-start completed in 1.5006065s
	I0601 19:52:09.657303    6740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412
	I0601 19:52:10.897070    6740 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412: (1.2397024s)
	I0601 19:52:10.897530    6740 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\config.json ...
	I0601 19:52:10.909930    6740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 19:52:10.917564    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:12.135575    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2179474s)
	I0601 19:52:12.135575    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:52:12.260216    6740 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3501837s)
	I0601 19:52:12.271649    6740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 19:52:12.287853    6740 start.go:134] duration metric: createHost completed in 1m9.8780006s
	I0601 19:52:12.287853    6740 start.go:81] releasing machines lock for "cilium-20220601193451-3412", held for 1m9.8787674s
	I0601 19:52:12.299466    6740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412
	I0601 19:52:13.516084    6740 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220601193451-3412: (1.2164306s)
	I0601 19:52:13.521319    6740 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 19:52:13.529221    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:13.530220    6740 ssh_runner.go:195] Run: systemctl --version
	I0601 19:52:13.537191    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:14.758641    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2213867s)
	I0601 19:52:14.758641    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:52:14.837643    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.308354s)
	I0601 19:52:14.837643    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:52:14.893008    6740 ssh_runner.go:235] Completed: systemctl --version: (1.3625754s)
	I0601 19:52:14.917790    6740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 19:52:15.010379    6740 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4889821s)
	I0601 19:52:15.025383    6740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:52:15.054733    6740 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 19:52:15.073796    6740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 19:52:15.105786    6740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 19:52:15.148020    6740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 19:52:15.323339    6740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 19:52:15.631719    6740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:52:15.685369    6740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 19:52:15.919812    6740 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 19:52:15.960471    6740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:52:16.075477    6740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:52:16.200411    6740 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 19:52:16.207418    6740 cli_runner.go:164] Run: docker exec -t cilium-20220601193451-3412 dig +short host.docker.internal
	I0601 19:52:17.609170    6740 cli_runner.go:217] Completed: docker exec -t cilium-20220601193451-3412 dig +short host.docker.internal: (1.4016797s)
	I0601 19:52:17.609170    6740 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 19:52:17.619190    6740 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 19:52:17.635443    6740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:52:17.687930    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:52:18.797742    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.1095616s)
	I0601 19:52:18.797816    6740 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:52:18.807805    6740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:52:18.893846    6740 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:52:18.893966    6740 docker.go:541] Images already preloaded, skipping extraction
	I0601 19:52:18.907840    6740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:52:18.989598    6740 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:52:18.989598    6740 cache_images.go:84] Images are preloaded, skipping loading
	I0601 19:52:18.996596    6740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 19:52:19.210219    6740 cni.go:95] Creating CNI manager for "cilium"
	I0601 19:52:19.210304    6740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 19:52:19.210304    6740 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220601193451-3412 NodeName:cilium-20220601193451-3412 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 19:52:19.210304    6740 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20220601193451-3412"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 19:52:19.210304    6740 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20220601193451-3412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0601 19:52:19.224233    6740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 19:52:19.245842    6740 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 19:52:19.254842    6740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 19:52:19.277919    6740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0601 19:52:19.325187    6740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 19:52:19.368830    6740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0601 19:52:19.422066    6740 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0601 19:52:19.431069    6740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:52:19.454031    6740 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412 for IP: 192.168.67.2
	I0601 19:52:19.454031    6740 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0601 19:52:19.454031    6740 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0601 19:52:19.455034    6740 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.key
	I0601 19:52:19.455034    6740 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.crt with IP's: []
	I0601 19:52:19.568526    6740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.crt ...
	I0601 19:52:19.568526    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.crt: {Name:mk7acde258faa80e12403df6a9b0c0c4c9ed789f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:19.570482    6740 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.key ...
	I0601 19:52:19.570482    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\client.key: {Name:mk09df2b8126c0ccbf0388e4711100f58eed3519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:19.572489    6740 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key.c7fa3a9e
	I0601 19:52:19.572489    6740 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 19:52:19.928134    6740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt.c7fa3a9e ...
	I0601 19:52:19.928134    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt.c7fa3a9e: {Name:mk82eb821787cfffad44787b29d6ed32b81b4df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:19.929241    6740 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key.c7fa3a9e ...
	I0601 19:52:19.929241    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key.c7fa3a9e: {Name:mkbb4267824edf006aa3c1b997e201a3ccf8d12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:19.930064    6740 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt
	I0601 19:52:19.941979    6740 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key
	I0601 19:52:19.944549    6740 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.key
	I0601 19:52:19.945358    6740 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.crt with IP's: []
	I0601 19:52:20.356175    6740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.crt ...
	I0601 19:52:20.356175    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.crt: {Name:mkeff36b574877f39a9a6da0102f5f0447c11594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:20.357210    6740 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.key ...
	I0601 19:52:20.357210    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.key: {Name:mk884edc8876c6182f3889ecd9637f3670bb935b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:52:20.366183    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem (1338 bytes)
	W0601 19:52:20.366183    6740 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412_empty.pem, impossibly tiny 0 bytes
	I0601 19:52:20.367169    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0601 19:52:20.367169    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0601 19:52:20.367169    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0601 19:52:20.367169    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0601 19:52:20.368181    6740 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem (1708 bytes)
	I0601 19:52:20.369177    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 19:52:20.437883    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 19:52:20.495279    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 19:52:20.555740    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220601193451-3412\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 19:52:20.609683    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 19:52:20.658047    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 19:52:20.726081    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 19:52:20.775413    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 19:52:20.830236    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /usr/share/ca-certificates/34122.pem (1708 bytes)
	I0601 19:52:20.892223    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 19:52:20.963828    6740 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem --> /usr/share/ca-certificates/3412.pem (1338 bytes)
	I0601 19:52:21.024464    6740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 19:52:21.086865    6740 ssh_runner.go:195] Run: openssl version
	I0601 19:52:21.124593    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34122.pem && ln -fs /usr/share/ca-certificates/34122.pem /etc/ssl/certs/34122.pem"
	I0601 19:52:21.155111    6740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34122.pem
	I0601 19:52:21.169130    6740 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 17:56 /usr/share/ca-certificates/34122.pem
	I0601 19:52:21.182130    6740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34122.pem
	I0601 19:52:21.209126    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34122.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 19:52:21.244391    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 19:52:21.277666    6740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:52:21.295472    6740 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:46 /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:52:21.318644    6740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:52:21.346172    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 19:52:21.385696    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3412.pem && ln -fs /usr/share/ca-certificates/3412.pem /etc/ssl/certs/3412.pem"
	I0601 19:52:21.422676    6740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3412.pem
	I0601 19:52:21.434673    6740 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 17:56 /usr/share/ca-certificates/3412.pem
	I0601 19:52:21.447382    6740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3412.pem
	I0601 19:52:21.475815    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3412.pem /etc/ssl/certs/51391683.0"
	I0601 19:52:21.499123    6740 kubeadm.go:395] StartCluster: {Name:cilium-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:52:21.506144    6740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 19:52:21.583898    6740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 19:52:21.631569    6740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 19:52:21.672213    6740 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 19:52:21.686338    6740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 19:52:21.713356    6740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 19:52:21.713356    6740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 19:52:51.394103    6740 out.go:204]   - Generating certificates and keys ...
	I0601 19:52:51.400130    6740 out.go:204]   - Booting up control plane ...
	I0601 19:52:51.411680    6740 out.go:204]   - Configuring RBAC rules ...
	I0601 19:52:51.416687    6740 cni.go:95] Creating CNI manager for "cilium"
	I0601 19:52:51.420678    6740 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0601 19:52:51.433683    6740 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0601 19:52:51.590864    6740 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0601 19:52:51.591941    6740 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0601 19:52:51.591941    6740 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 and 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install the cilium cni plugin so that exec
	          # permissions are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount the cgroup filesystem from the host into the cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0601 19:52:51.591941    6740 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 19:52:51.591941    6740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0601 19:52:51.708057    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 19:52:55.677145    6740 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.9688823s)
	I0601 19:52:55.677145    6740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 19:52:55.698683    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:55.698683    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=cilium-20220601193451-3412 minikube.k8s.io/updated_at=2022_06_01T19_52_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:55.703244    6740 ops.go:34] apiserver oom_adj: -16
	I0601 19:52:55.994243    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:56.713689    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:57.211106    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:57.718281    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:58.220934    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:58.719541    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:59.212230    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:52:59.723487    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:00.225255    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:00.724263    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:01.208656    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:01.717138    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:02.210253    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:03.209765    6740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:53:10.280931    6740 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (7.0697998s)
	I0601 19:53:10.280931    6740 kubeadm.go:1045] duration metric: took 14.6030277s to wait for elevateKubeSystemPrivileges.
	I0601 19:53:10.280931    6740 kubeadm.go:397] StartCluster complete in 48.7792736s
	I0601 19:53:10.280931    6740 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:53:10.281335    6740 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:53:10.289891    6740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:53:11.079793    6740 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220601193451-3412" rescaled to 1
	I0601 19:53:11.080408    6740 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 19:53:11.080408    6740 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:53:11.080408    6740 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220601193451-3412"
	I0601 19:53:11.080408    6740 addons.go:65] Setting default-storageclass=true in profile "cilium-20220601193451-3412"
	I0601 19:53:11.085396    6740 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220601193451-3412"
	I0601 19:53:11.080408    6740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 19:53:11.085396    6740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220601193451-3412"
	I0601 19:53:11.081399    6740 config.go:178] Loaded profile config "cilium-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:11.085396    6740 out.go:177] * Verifying Kubernetes components...
	W0601 19:53:11.085396    6740 addons.go:165] addon storage-provisioner should already be in state true
	I0601 19:53:11.085396    6740 host.go:66] Checking if "cilium-20220601193451-3412" exists ...
	I0601 19:53:11.110407    6740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:53:11.115394    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:53:11.121400    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:53:11.484960    6740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 19:53:11.504288    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:53:12.627344    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.5118711s)
	I0601 19:53:12.659434    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.5379533s)
	I0601 19:53:12.662657    6740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 19:53:12.667618    6740 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:53:12.667618    6740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 19:53:12.678880    6740 addons.go:153] Setting addon default-storageclass=true in "cilium-20220601193451-3412"
	W0601 19:53:12.678880    6740 addons.go:165] addon default-storageclass should already be in state true
	I0601 19:53:12.678880    6740 host.go:66] Checking if "cilium-20220601193451-3412" exists ...
	I0601 19:53:12.681867    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:53:12.715875    6740 cli_runner.go:164] Run: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}
	I0601 19:53:12.996016    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.4916507s)
	I0601 19:53:13.000015    6740 node_ready.go:35] waiting up to 5m0s for node "cilium-20220601193451-3412" to be "Ready" ...
	I0601 19:53:13.085470    6740 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.6002093s)
	I0601 19:53:13.085470    6740 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 19:53:13.191888    6740 node_ready.go:49] node "cilium-20220601193451-3412" has status "Ready":"True"
	I0601 19:53:13.191888    6740 node_ready.go:38] duration metric: took 191.8624ms waiting for node "cilium-20220601193451-3412" to be "Ready" ...
	I0601 19:53:13.191888    6740 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 19:53:13.307629    6740 pod_ready.go:78] waiting up to 5m0s for pod "cilium-72kgq" in "kube-system" namespace to be "Ready" ...
	I0601 19:53:14.154993    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.4728794s)
	I0601 19:53:14.155074    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:53:14.186005    6740 cli_runner.go:217] Completed: docker container inspect cilium-20220601193451-3412 --format={{.State.Status}}: (1.4700539s)
	I0601 19:53:14.186317    6740 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 19:53:14.186392    6740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 19:53:14.204398    6740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412
	I0601 19:53:14.806258    6740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:53:15.473377    6740 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601193451-3412: (1.2689136s)
	I0601 19:53:15.473377    6740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61146 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220601193451-3412\id_rsa Username:docker}
	I0601 19:53:15.624238    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:15.693210    6740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 19:53:19.551095    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:21.387525    6740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.6940197s)
	I0601 19:53:21.387525    6740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.5809261s)
	I0601 19:53:21.394514    6740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0601 19:53:21.398506    6740 addons.go:417] enableAddons completed in 10.3175628s
	I0601 19:53:21.803101    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:24.281895    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:26.285103    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:28.782581    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:30.784224    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:33.133428    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:35.191839    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:37.279346    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:39.791575    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:42.195862    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:45.125333    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:47.193972    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:49.697143    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:51.897760    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:54.202425    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:56.698306    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:59.033254    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:02.046039    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:04.189144    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:06.723252    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:09.198168    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:11.200634    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:14.479862    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:16.698256    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:18.752296    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:21.206981    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:23.640996    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:26.181054    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:28.186283    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:30.637378    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:32.644995    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:35.152650    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:37.646304    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:40.139079    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:42.145653    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:44.145918    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:46.641837    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:49.133409    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:51.133515    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:54.388494    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:56.643921    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:58.651859    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:01.135850    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:03.637870    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:06.141409    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:08.144843    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:10.639874    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:12.703694    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:15.148553    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:17.652407    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:20.133869    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:22.145313    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:24.640927    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:26.644704    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:29.141351    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:31.161748    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:33.180032    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:35.658565    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:38.148093    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:40.154068    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:42.339721    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:44.643831    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:46.663218    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:49.148866    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:51.154933    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:53.358701    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:55.651665    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:58.140452    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:00.142389    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:02.699219    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:05.148705    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:07.152851    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:09.642654    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:11.648443    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:13.780072    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:16.146590    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:18.248564    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:20.634905    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:23.137556    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:25.145866    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:27.148419    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:29.635423    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:31.636979    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:33.642954    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:35.643699    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:37.654838    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:40.156687    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:42.645885    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:45.136814    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:47.149331    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:49.157783    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:51.641239    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:53.644187    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:56.153673    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:58.640454    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:00.642794    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:02.690451    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:04.719428    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:07.146905    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:09.650975    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:11.651186    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:13.646907    6740 pod_ready.go:81] duration metric: took 4m0.3268596s waiting for pod "cilium-72kgq" in "kube-system" namespace to be "Ready" ...
	E0601 19:57:13.646907    6740 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0601 19:57:13.646907    6740 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-78f49c47f-rpgvw" in "kube-system" namespace to be "Ready" ...
	I0601 19:57:13.664908    6740 pod_ready.go:92] pod "cilium-operator-78f49c47f-rpgvw" in "kube-system" namespace has status "Ready":"True"
	I0601 19:57:13.665899    6740 pod_ready.go:81] duration metric: took 18.991ms waiting for pod "cilium-operator-78f49c47f-rpgvw" in "kube-system" namespace to be "Ready" ...
	I0601 19:57:13.665899    6740 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-8fw56" in "kube-system" namespace to be "Ready" ...
	I0601 19:57:15.718689    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:17.725753    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:19.767669    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:22.226540    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:24.730114    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:27.217059    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:29.715106    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:31.722416    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:33.722854    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:35.724958    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:38.226635    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:40.231407    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:42.719643    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:44.721217    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:46.721706    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:49.214386    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:51.214796    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:53.222159    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:55.223165    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:57.223877    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:59.713593    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:01.713768    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:03.725546    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:06.225166    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:08.719826    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:11.215385    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:13.224406    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:15.721309    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:17.722322    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:19.729694    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:22.220508    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:24.723519    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:27.212217    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:29.221029    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:31.713984    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:33.721278    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:36.213913    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:38.732962    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:41.211117    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:43.296829    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:45.724875    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:48.222620    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:50.224556    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:52.711981    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:54.733241    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:56.736345    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:59.220117    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:01.227409    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:03.728042    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:06.223249    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:08.717768    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:10.724501    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:13.218095    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:15.227792    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:17.228137    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:19.720833    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:21.723223    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:24.222973    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:26.731838    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:29.232474    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:31.736462    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:34.225061    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:36.723996    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:39.218681    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:41.224455    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:43.735667    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:46.232328    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:48.713551    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:50.724246    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:53.215245    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:55.216350    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:57.221976    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:59.721011    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:02.223706    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:04.229125    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:06.732947    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:09.234451    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:11.739931    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:14.227139    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:16.231339    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:18.721591    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:21.219231    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:23.233221    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:25.728579    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:28.227674    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:30.236888    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:32.717686    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:34.729103    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:36.729541    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:39.229591    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:41.731516    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:44.234358    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:46.729241    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:48.731138    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:51.225635    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:53.234371    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:55.729121    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:58.231396    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:00.717792    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:02.735821    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:05.223036    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:07.232506    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:09.725967    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:11.738750    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:13.753357    6740 pod_ready.go:102] pod "coredns-64897985d-8fw56" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:13.753357    6740 pod_ready.go:81] duration metric: took 4m0.0750959s waiting for pod "coredns-64897985d-8fw56" in "kube-system" namespace to be "Ready" ...
	E0601 20:01:13.753357    6740 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0601 20:01:13.753357    6740 pod_ready.go:38] duration metric: took 8m0.5366812s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 20:01:13.757354    6740 out.go:177] 
	W0601 20:01:13.760364    6740 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0601 20:01:13.760364    6740 out.go:239] * 
	W0601 20:01:13.761360    6740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 20:01:13.764358    6740 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (621.23s)

                                                
                                    
TestPause/serial/PauseAgain (85.41s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220601194928-3412 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-20220601194928-3412 --alsologtostderr -v=5: exit status 80 (9.0197707s)

                                                
                                                
-- stdout --
	* Pausing node pause-20220601194928-3412 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 19:53:11.092417    2356 out.go:296] Setting OutFile to fd 1940 ...
	I0601 19:53:11.172404    2356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:11.172404    2356 out.go:309] Setting ErrFile to fd 1956...
	I0601 19:53:11.172404    2356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:11.197404    2356 out.go:303] Setting JSON to false
	I0601 19:53:11.197404    2356 mustload.go:65] Loading cluster: pause-20220601194928-3412
	I0601 19:53:11.198414    2356 config.go:178] Loaded profile config "pause-20220601194928-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:11.216411    2356 cli_runner.go:164] Run: docker container inspect pause-20220601194928-3412 --format={{.State.Status}}
	I0601 19:53:14.748895    2356 cli_runner.go:217] Completed: docker container inspect pause-20220601194928-3412 --format={{.State.Status}}: (3.5323006s)
	I0601 19:53:14.748895    2356 host.go:66] Checking if "pause-20220601194928-3412" exists ...
	I0601 19:53:14.756249    2356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220601194928-3412
	I0601 19:53:15.956847    2356 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220601194928-3412: (1.2002384s)
	I0601 19:53:15.958088    2356 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0601 19:53:15.981055    2356 out.go:177] * Pausing node pause-20220601194928-3412 ... 
	I0601 19:53:15.985748    2356 host.go:66] Checking if "pause-20220601194928-3412" exists ...
	I0601 19:53:15.997858    2356 ssh_runner.go:195] Run: systemctl --version
	I0601 19:53:16.003612    2356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601194928-3412
	I0601 19:53:17.086067    2356 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601194928-3412: (1.0823984s)
	I0601 19:53:17.086067    2356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61074 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220601194928-3412\id_rsa Username:docker}
	I0601 19:53:17.204226    2356 ssh_runner.go:235] Completed: systemctl --version: (1.2063046s)
	I0601 19:53:17.217072    2356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:53:17.247572    2356 pause.go:50] kubelet running: true
	I0601 19:53:17.257853    2356 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0601 19:53:17.596531    2356 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0601 19:53:17.671361    2356 docker.go:459] Pausing containers: [f19367b3a8f4 d05d6272ca57 746b0665f137 f1aa86d0d8db bff3c609c584 fbb3a2020c77 723ae8a7be41 40b58d79d3b2 7bdc3ac58272 6a1f8d9c25e7 9f1178964140 988ce573a0b0 81893bc03c3b 3d0fdf412d9a]
	I0601 19:53:17.679358    2356 ssh_runner.go:195] Run: docker pause f19367b3a8f4 d05d6272ca57 746b0665f137 f1aa86d0d8db bff3c609c584 fbb3a2020c77 723ae8a7be41 40b58d79d3b2 7bdc3ac58272 6a1f8d9c25e7 9f1178964140 988ce573a0b0 81893bc03c3b 3d0fdf412d9a
	I0601 19:53:19.670312    2356 ssh_runner.go:235] Completed: docker pause f19367b3a8f4 d05d6272ca57 746b0665f137 f1aa86d0d8db bff3c609c584 fbb3a2020c77 723ae8a7be41 40b58d79d3b2 7bdc3ac58272 6a1f8d9c25e7 9f1178964140 988ce573a0b0 81893bc03c3b 3d0fdf412d9a: (1.9908503s)
	I0601 19:53:19.691398    2356 out.go:177] 
	W0601 19:53:19.703426    2356 out.go:239] X Exiting due to GUEST_PAUSE: docker: docker pause f19367b3a8f4 d05d6272ca57 746b0665f137 f1aa86d0d8db bff3c609c584 fbb3a2020c77 723ae8a7be41 40b58d79d3b2 7bdc3ac58272 6a1f8d9c25e7 9f1178964140 988ce573a0b0 81893bc03c3b 3d0fdf412d9a: Process exited with status 1
	stdout:
	f19367b3a8f4
	d05d6272ca57
	746b0665f137
	f1aa86d0d8db
	bff3c609c584
	fbb3a2020c77
	40b58d79d3b2
	7bdc3ac58272
	6a1f8d9c25e7
	9f1178964140
	988ce573a0b0
	81893bc03c3b
	3d0fdf412d9a
	
	stderr:
	Error response from daemon: Cannot pause container 723ae8a7be4143203ce3e98a2242812da835b79477739276946430d7126424b7: OCI runtime pause failed: unable to freeze: unknown
	
	W0601 19:53:19.704413    2356 out.go:239] * 
	W0601 19:53:19.748177    2356 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 19:53:19.798150    2356 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-20220601194928-3412 --alsologtostderr -v=5" : exit status 80
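The GUEST_PAUSE error above carries the failing container's full ID inside the daemon's stderr ("Cannot pause container 723ae8a7be41…: OCI runtime pause failed: unable to freeze"), while minikube's own pause list uses 12-character short IDs. A small parsing sketch (the regex and helper name are illustrative, not part of the test suite) for pulling the offender out of such a log during triage:

```python
import re
from typing import Optional

# stderr text as it appears in the minikube pause failure above
stderr_text = (
    "Error response from daemon: Cannot pause container "
    "723ae8a7be4143203ce3e98a2242812da835b79477739276946430d7126424b7: "
    "OCI runtime pause failed: unable to freeze: unknown"
)

def failed_pause_container(stderr: str) -> Optional[str]:
    """Extract the full 64-hex container ID from a 'Cannot pause container' error."""
    m = re.search(r"Cannot pause container ([0-9a-f]{64})", stderr)
    return m.group(1) if m else None

full_id = failed_pause_container(stderr_text)
print(full_id)       # full 64-character ID from the daemon error
print(full_id[:12])  # short ID, matching "723ae8a7be41" in the pause list above
```

The short ID makes it easy to see that the failed container is the one missing from the stdout list of successfully paused containers.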
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220601194928-3412
helpers_test.go:231: (dbg) Done: docker inspect pause-20220601194928-3412: (1.3677168s)
helpers_test.go:235: (dbg) docker inspect pause-20220601194928-3412:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094",
	        "Created": "2022-06-01T19:50:31.782444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:50:33.2539211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/hostname",
	        "HostsPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/hosts",
	        "LogPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094-json.log",
	        "Name": "/pause-20220601194928-3412",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20220601194928-3412:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220601194928-3412",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a-init/diff:/var/lib/docker/overlay2/487b259deb346e6ca1e96023cfc1832638489725b45384e10e2c2effe462993c/diff:/var/lib/docker/overlay2/7830a7ee158a10893945c1b577efeb821d499cce7646d95d3c0cffb3ed372dca/diff:/var/lib/docker/overlay2/6fe83b204fd4124b69c52dc2b8620b75ac92764b58a8d1af6662ff240e517719/diff:/var/lib/docker/overlay2/6362560b46c9fab8d6514c8429f6275481f64020b6a76226333ec63d40b3509c/diff:/var/lib/docker/overlay2/b947dedac2c38cb9982c9b363e89606d658250ef2798320fdf3517f747048abd/diff:/var/lib/docker/overlay2/bc2839e6d5fd56592e9530bb7f1f81ed9502bdb7539e7f429732e9cf4cd3b17d/diff:/var/lib/docker/overlay2/1b3239e13a55e9fa626a7541842d884445974471039cc2d9226ad10f2b953536/diff:/var/lib/docker/overlay2/1884c2d81ecac540a3174fb86cefef2fd199eaa5c78d29afe6c63aff263f9584/diff:/var/lib/docker/overlay2/d1c361312180db411937b7786e1329e12f9ed7b9439d4574d6d9a237a8ef8a9e/diff:/var/lib/docker/overlay2/15125b9e77872950f8bc77e7ec27026feb64d93311200f76586c570bbceb3810/diff:/var/lib/docker/overlay2/1778c10167346a2b58dd494e4689512b56050eed4b6df53a451f9aa373c3af35/diff:/var/lib/docker/overlay2/e45fa45d984d0fdd2eaca3b15c5e81abaa51b6b84fc051f20678d16cb6548a34/diff:/var/lib/docker/overlay2/54cea2bf354fab8e2c392a574195b06b919122ff6a1fb01b05f554ba43d9719e/diff:/var/lib/docker/overlay2/8667e3403c29f1a18aaababc226712f548d7dd623a4b9ac413520cf72955fb40/diff:/var/lib/docker/overlay2/5d20284be4fd7015d5b8eb6ae55b108a262e3c66cdaa9a8c4c23a6eb1726d4da/diff:/var/lib/docker/overlay2/d623242b443d7de7f75761cda756115d0f9df9f3b73144554928ceac06876d5b/diff:/var/lib/docker/overlay2/143dd7f527aa222e0eeaafe5e0182140c95e402aa335e7994b2aa7f1e6b6ba3c/diff:/var/lib/docker/overlay2/d690aea98cc6cb39fdd3f6660997b792085628157b14d576701adc72d3e6cf55/diff:/var/lib/docker/overlay2/2bb1d07709342e3bcb4feda7dc7d17fa9707986bf88cd7dc52eab255748276e0/diff:/var/lib/docker/overlay2/ea79e7f8097cf29c435b8a18ee6332b067ec4f7858b6eaabf897d2076a8deb3e/diff:/var/lib/docker/overlay2/dab209c0bb58d228f914118438b0a79649c46857e6fcb416c0c556c049154f5d/diff:/var/lib/docker/overlay2/3bd421aaea3202bb8715cdd0f452aa411f20f2025b05d6a03811ebc7d0347896/diff:/var/lib/docker/overlay2/7dc112f5a6dc7809e579b2eaaeef54d3d5ee1326e9f35817dad641bc4e2c095a/diff:/var/lib/docker/overlay2/772b23d424621d351ce90f47e351441dc7fb224576441813bb86be52c0552022/diff:/var/lib/docker/overlay2/86ea33f163c6d58acb53a8e5bb27e1c131a6c915d7459ca03c90383b299fde58/diff:/var/lib/docker/overlay2/58deaba6fb571643d48dd090dd850eeb8fd343f41591580f4509fe61280e87de/diff:/var/lib/docker/overlay2/d8e5be8b94fe5858e777434bd7d360128719def82a5e7946fd4cb69aecab39fe/diff:/var/lib/docker/overlay2/a319e02b15899f20f933362a00fa40be829441edea2a0be36cc1e30b3417cf57/diff:/var/lib/docker/overlay2/b315efdf7f2b5f50f74664829533097f21ab8bda14478b76e9b5781079830b20/diff:/var/lib/docker/overlay2/bb96faec132eb5919c94fc772f61e63514308af6f72ec141483a94a85a77cc3b/diff:/var/lib/docker/overlay2/55dbff36528117ad96b3be9ee2396f7faee2f0b493773aa5abf5ba2b23a5f728/diff:/var/lib/docker/overlay2/f11da52264a1f34c3b2180d2adcbcb7cc077c7f91611974bf0946d6bea248de5/diff:/var/lib/docker/overlay2/6ca19b0a8327fcd8f60b06c6b0f4519ff5f0f3eacd034e6c5c16ed45239f2238/diff:/var/lib/docker/overlay2/f86ed588a9cb5b359a174312bf8595e8e896ba3d8922b0bae1d8839518d24fb6/diff:/var/lib/docker/overlay2/0bf0e1906e62c903f71626646e2339b8e2c809d40948898d803dcaf0218ed0dd/diff:/var/lib/docker/overlay2/c8ff277ec5a9fa0db24ad64c7e0523b2b5a5c7d64f2148a0c9823fdd5bc60cad/diff:/var/lib/docker/overlay2/4cfbf9fc2a4a968773220ae74312f07a616afc80cbf9a4b68e2c2357c09ca009/diff:/var/lib/docker/overlay2/9a235e4b15bee3f10260f9356535723bf351a49b1f19af094d59a1439b7a9632/diff:/var/lib/docker/overlay2/9699d245a454ce1e21f1ac875a0910a63fb34d3d2870f163d8b6d258f33c2f4f/diff:/var/lib/docker/overlay2/6e093a9dfe282a2a53a4081251541e0c5b4176bb42d9c9bf908f19b1fdc577f5/diff:/var/lib/docker/overlay2/98036438a55a1794d298c11dc1eb0633e06ed433b84d24a3972e634a0b11deb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20220601194928-3412",
	                "Source": "/var/lib/docker/volumes/pause-20220601194928-3412/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220601194928-3412",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220601194928-3412",
	                "name.minikube.sigs.k8s.io": "pause-20220601194928-3412",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "156f723e1e9d36ef19e56832e80e0b7533826382adbb71a72695a73b1d2b7ad3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/156f723e1e9d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220601194928-3412": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18c3a3351267",
	                        "pause-20220601194928-3412"
	                    ],
	                    "NetworkID": "236303b3a2bb67679c11e5044d40d1907ff737afd0b0490c48a6dcf0ed6cc3df",
	                    "EndpointID": "6a11984d145c8b540d3966a00ec4b85aa5a0027325a6df1dd811bb8c29609da6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
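The post-mortem `docker inspect` dump above is an array of container objects; the `State` block is what matters here, since it shows the node container still `running` and not `Paused` after the failed pause. A minimal standard-library sketch of reading those flags (the JSON literal is an abbreviated copy of the output above, not a live query):

```python
import json

# Abbreviated `docker inspect` output, copied from the dump above.
inspect_output = """
[
    {
        "Id": "18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094",
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "ExitCode": 0
        },
        "Name": "/pause-20220601194928-3412"
    }
]
"""

containers = json.loads(inspect_output)  # docker inspect always returns a list
state = containers[0]["State"]
# After the failed `minikube pause`, the node container is running, not paused.
print(state["Status"], state["Paused"])
```

Against a live daemon the same structure comes back from `docker inspect <name>`, so the identical parsing works on its output.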
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601194928-3412 -n pause-20220601194928-3412

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601194928-3412 -n pause-20220601194928-3412: exit status 2 (8.229251s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20220601194928-3412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-20220601194928-3412 logs -n 25: (24.1909746s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |       User        |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:44 GMT | 01 Jun 22 19:46 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	|         | --memory=2200                          |                                        |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                                        |                   |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |                |                     |                     |
	| stop    | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:46 GMT | 01 Jun 22 19:46 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	| start   | -p                                     | missing-upgrade-20220601194025-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:44 GMT | 01 Jun 22 19:47 GMT |
	|         | missing-upgrade-20220601194025-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| start   | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:45 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220601194025-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | missing-upgrade-20220601194025-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:46 GMT | 01 Jun 22 19:48 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	|         | --memory=2200                          |                                        |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |                   |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220601193729-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:48 GMT | 01 Jun 22 19:49 GMT |
	|         | cert-expiration-20220601193729-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:48 GMT | 01 Jun 22 19:49 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	|         | --memory=2200                          |                                        |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |                   |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220601193729-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:49 GMT |
	|         | cert-expiration-20220601193729-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:49 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	| start   | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |                   |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |                   |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |                   |                |                     |                     |
	| ssh     | cert-options-20220601194744-3412       | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | ssh openssl x509 -text -noout -in      |                                        |                   |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |                   |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	|         | -- sudo cat                            |                                        |                   |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |                   |                |                     |                     |
	| delete  | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	| start   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:52 GMT |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --install-addons=false                 |                                        |                   |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |                   |                |                     |                     |
	| start   | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:52 GMT |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --alsologtostderr                      |                                        |                   |                |                     |                     |
	|         | --wait=true --wait-timeout=5m          |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| ssh     | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | pgrep -a kubelet                       |                                        |                   |                |                     |                     |
	| start   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | --alsologtostderr -v=1                 |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| start   | -p                                     | running-upgrade-20220601194733-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:51 GMT | 01 Jun 22 19:52 GMT |
	|         | running-upgrade-20220601194733-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| pause   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | --alsologtostderr -v=5                 |                                        |                   |                |                     |                     |
	| unpause | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	|         | --alsologtostderr -v=5                 |                                        |                   |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601194733-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	|         | running-upgrade-20220601194733-3412    |                                        |                   |                |                     |                     |
	| delete  | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	|---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* I0601 19:53:24.281895    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:26.285103    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	Log file created at: 2022/06/01 19:53:28
	Running on machine: minikube4
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 19:53:28.486890   11184 out.go:296] Setting OutFile to fd 1752 ...
	I0601 19:53:28.550876   11184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:28.550876   11184 out.go:309] Setting ErrFile to fd 1764...
	I0601 19:53:28.550876   11184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:28.562881   11184 out.go:303] Setting JSON to false
	I0601 19:53:28.565889   11184 start.go:115] hostinfo: {"hostname":"minikube4","uptime":73323,"bootTime":1654039885,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 19:53:28.565889   11184 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 19:53:28.571877   11184 out.go:177] * [false-20220601193442-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 19:53:28.575905   11184 notify.go:193] Checking for updates...
	I0601 19:53:28.581916   11184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:53:28.587900   11184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 19:53:28.594908   11184 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 19:53:28.599896   11184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:50:34 UTC, end at Wed 2022-06-01 19:53:36 UTC. --
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309051400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309131000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309176600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312181900Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312315500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312452700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312515900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.335315200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.353908800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354015500Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354032500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354048100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354056000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354063400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354420200Z" level=info msg="Loading containers: start."
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.593425100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.693203600Z" level=info msg="Loading containers: done."
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.718021800Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.718175100Z" level=info msg="Daemon has completed initialization"
	Jun 01 19:50:57 pause-20220601194928-3412 systemd[1]: Started Docker Application Container Engine.
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.788665400Z" level=info msg="API listen on [::]:2376"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.794543200Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 19:52:05 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:52:05.131083900Z" level=info msg="ignoring event" container=73b4e698b498ee66d9377f22adbac1b577f1b75a7fe208c9258dea44477457e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:52:05 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:52:05.393811800Z" level=info msg="ignoring event" container=17383e44086b7bf7da720381011c707ded900769814831bbf167cb77b922bed7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:53:18 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:53:18.222568500Z" level=error msg="Handler for POST /v1.41/containers/723ae8a7be41/pause returned error: Cannot pause container 723ae8a7be4143203ce3e98a2242812da835b79477739276946430d7126424b7: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* time="2022-06-01T19:53:38Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	f19367b3a8f4   6e38f40d628d           "/storage-provisioner"   55 seconds ago       Up 54 seconds (Paused)                 k8s_storage-provisioner_storage-provisioner_kube-system_9841ef75-f6f7-4546-8b4e-2aadf5699c2b_0
	d05d6272ca57   k8s.gcr.io/pause:3.6   "/pause"                 55 seconds ago       Up 54 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_9841ef75-f6f7-4546-8b4e-2aadf5699c2b_0
	746b0665f137   a4ca41631cc7           "/coredns -conf /etc…"   About a minute ago   Up About a minute (Paused)             k8s_coredns_coredns-64897985d-7n8j8_kube-system_9256be30-fb9d-40ab-867e-89615489d771_0
	f1aa86d0d8db   4c0375452406           "/usr/local/bin/kube…"   About a minute ago   Up About a minute (Paused)             k8s_kube-proxy_kube-proxy-5zrp5_kube-system_407893ee-127b-4049-b043-da518058f009_0
	bff3c609c584   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_coredns-64897985d-7n8j8_kube-system_9256be30-fb9d-40ab-867e-89615489d771_0
	fbb3a2020c77   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-proxy-5zrp5_kube-system_407893ee-127b-4049-b043-da518058f009_0
	723ae8a7be41   25f8c7f3da61           "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                           k8s_etcd_etcd-pause-20220601194928-3412_kube-system_e6809dca5ea4d80a1c02803b0a98b488_0
	40b58d79d3b2   df7b72818ad2           "kube-controller-man…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-controller-manager_kube-controller-manager-pause-20220601194928-3412_kube-system_4679fba103d87cd475bcbad3d12eacc5_0
	7bdc3ac58272   595f327f224a           "kube-scheduler --au…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-scheduler_kube-scheduler-pause-20220601194928-3412_kube-system_42764eeb51bfab545d2537d74337e71c_0
	6a1f8d9c25e7   8fa62c12256d           "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-apiserver_kube-apiserver-pause-20220601194928-3412_kube-system_87e851f4b00765eb831c0ab86bae4ace_0
	9f1178964140   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-controller-manager-pause-20220601194928-3412_kube-system_4679fba103d87cd475bcbad3d12eacc5_0
	988ce573a0b0   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_etcd-pause-20220601194928-3412_kube-system_e6809dca5ea4d80a1c02803b0a98b488_0
	81893bc03c3b   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-scheduler-pause-20220601194928-3412_kube-system_42764eeb51bfab545d2537d74337e71c_0
	3d0fdf412d9a   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-apiserver-pause-20220601194928-3412_kube-system_87e851f4b00765eb831c0ab86bae4ace_0
	
	* 
	* ==> coredns [746b0665f137] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jun 1 19:32] WSL2: Performing memory compaction.
	[Jun 1 19:33] WSL2: Performing memory compaction.
	[Jun 1 19:34] WSL2: Performing memory compaction.
	[Jun 1 19:35] WSL2: Performing memory compaction.
	[Jun 1 19:37] WSL2: Performing memory compaction.
	[Jun 1 19:38] WSL2: Performing memory compaction.
	[Jun 1 19:39] WSL2: Performing memory compaction.
	[Jun 1 19:41] WSL2: Performing memory compaction.
	[ +31.599109] process 'docker/tmp/qemu-check383814209/check' started with executable stack
	[Jun 1 19:42] WSL2: Performing memory compaction.
	[Jun 1 19:43] WSL2: Performing memory compaction.
	[Jun 1 19:47] WSL2: Performing memory compaction.
	[Jun 1 19:49] WSL2: Performing memory compaction.
	[Jun 1 19:50] WSL2: Performing memory compaction.
	[Jun 1 19:51] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.006260] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010917] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000002] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000003] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun 1 19:53] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [723ae8a7be41] <==
	* {"level":"info","ts":"2022-06-01T19:53:15.318Z","caller":"traceutil/trace.go:171","msg":"trace[1077161459] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:543; }","duration":"591.8406ms","start":"2022-06-01T19:53:14.726Z","end":"2022-06-01T19:53:15.318Z","steps":["trace[1077161459] 'agreement among raft nodes before linearized reading'  (duration: 558.9343ms)","trace[1077161459] 'count revisions from in-memory index tree'  (duration: 32.5684ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:15.318Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:14.726Z","time spent":"591.9183ms","remote":"127.0.0.1:36762","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true "}
	{"level":"warn","ts":"2022-06-01T19:53:15.318Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"627.9297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3582"}
	{"level":"info","ts":"2022-06-01T19:53:15.318Z","caller":"traceutil/trace.go:171","msg":"trace[2020903619] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:543; }","duration":"628.5132ms","start":"2022-06-01T19:53:14.690Z","end":"2022-06-01T19:53:15.318Z","steps":["trace[2020903619] 'agreement among raft nodes before linearized reading'  (duration: 595.1702ms)","trace[2020903619] 'range keys from in-memory index tree'  (duration: 32.714ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:15.319Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:14.690Z","time spent":"628.6447ms","remote":"127.0.0.1:36678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":3606,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2022-06-01T19:53:15.846Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:16.347Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:16.848Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:17.350Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:17.576Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.230288s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-7n8j8\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2022-06-01T19:53:17.577Z","caller":"traceutil/trace.go:171","msg":"trace[297424335] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-7n8j8; range_end:; }","duration":"2.2308765s","start":"2022-06-01T19:53:15.345Z","end":"2022-06-01T19:53:17.576Z","steps":["trace[297424335] 'agreement among raft nodes before linearized reading'  (duration: 2.2302121s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:53:17.577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:15.345Z","time spent":"2.2313245s","remote":"127.0.0.1:36678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/pods/kube-system/coredns-64897985d-7n8j8\" "}
	WARNING: 2022/06/01 19:53:17 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-06-01T19:53:17.873Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:18.374Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:18.874Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:19.375Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:19.544Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"4.2028531s","expected-duration":"1s"}
	{"level":"info","ts":"2022-06-01T19:53:19.544Z","caller":"traceutil/trace.go:171","msg":"trace[1986365679] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:577; }","duration":"4.1989464s","start":"2022-06-01T19:53:15.345Z","end":"2022-06-01T19:53:19.544Z","steps":["trace[1986365679] 'read index received'  (duration: 4.1989009s)","trace[1986365679] 'applied index is now lower than readState.Index'  (duration: 41.3µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.1264244s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.0660694s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2022-06-01T19:53:19.548Z","caller":"traceutil/trace.go:171","msg":"trace[1379093940] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:544; }","duration":"3.1268307s","start":"2022-06-01T19:53:16.421Z","end":"2022-06-01T19:53:19.548Z","steps":["trace[1379093940] 'agreement among raft nodes before linearized reading'  (duration: 3.1237342s)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T19:53:19.548Z","caller":"traceutil/trace.go:171","msg":"trace[375780806] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:544; }","duration":"4.0662104s","start":"2022-06-01T19:53:15.482Z","end":"2022-06-01T19:53:19.548Z","steps":["trace[375780806] 'agreement among raft nodes before linearized reading'  (duration: 4.0628643s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:16.421Z","time spent":"3.1269424s","remote":"127.0.0.1:36672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:15.482Z","time spent":"4.0663206s","remote":"127.0.0.1:36670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
	
	* 
	* ==> kernel <==
	*  19:53:51 up  2:15,  0 users,  load average: 8.08, 7.01, 4.63
	Linux pause-20220601194928-3412 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6a1f8d9c25e7] <==
	* I0601 19:51:38.002633       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:51:40.278861       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:51:40.386106       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:51:40.407075       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:51:41.571710       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:51:52.805503       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:51:52.808864       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:51:57.325312       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:53:10.248057       1 trace.go:205] Trace[89863760]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:6615ff52-a992-40b8-8e39-cfa59bac3b46,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 19:53:05.703) (total time: 4542ms):
	Trace[89863760]: ---"About to write a response" 4541ms (19:53:10.247)
	Trace[89863760]: [4.5420569s] [4.5420569s] END
	I0601 19:53:10.248223       1 trace.go:205] Trace[1228988290]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:737f44b8-1816-4125-bec9-24dd1c0a4407,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 19:53:05.587) (total time: 4657ms):
	Trace[1228988290]: ---"About to write a response" 4657ms (19:53:10.247)
	Trace[1228988290]: [4.6578647s] [4.6578647s] END
	I0601 19:53:15.320241       1 trace.go:205] Trace[342865520]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:a7f95376-4876-41a6-92ff-8e3d4685ec65,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 19:53:14.687) (total time: 632ms):
	Trace[342865520]: ---"About to write a response" 632ms (19:53:15.319)
	Trace[342865520]: [632.8275ms] [632.8275ms] END
	{"level":"warn","ts":"2022-06-01T19:53:17.573Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00068d880/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0601 19:53:17.574970       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0601 19:53:17.575880       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0601 19:53:17.577212       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0601 19:53:17.578720       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0601 19:53:17.580430       1 trace.go:205] Trace[1291614787]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-64897985d-7n8j8,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:2c33d721-0b90-408c-b829-8932696a4c43,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 19:53:15.345) (total time: 2235ms):
	Trace[1291614787]: [2.2353835s] [2.2353835s] END
	E0601 19:53:17.583773       1 timeout.go:141] post-timeout activity - time-elapsed: 9.5089ms, GET "/api/v1/namespaces/kube-system/pods/coredns-64897985d-7n8j8" result: <nil>
	
	* 
	* ==> kube-controller-manager [40b58d79d3b2] <==
	* I0601 19:51:52.284756       1 range_allocator.go:173] Starting range CIDR allocator
	I0601 19:51:52.284774       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0601 19:51:52.284816       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0601 19:51:52.285586       1 shared_informer.go:247] Caches are synced for GC 
	I0601 19:51:52.374158       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:51:52.375051       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 19:51:52.375384       1 disruption.go:371] Sending events to api server.
	I0601 19:51:52.375259       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:51:52.377826       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:51:52.378373       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 19:51:52.499240       1 event.go:294] "Event occurred" object="kube-system/etcd-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.499254       1 range_allocator.go:374] Set node pause-20220601194928-3412 PodCIDR to [10.244.0.0/24]
	I0601 19:51:52.674605       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.676506       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.676539       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.792800       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:51:52.868927       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:51:52.868964       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:51:52.888345       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 19:51:53.069897       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5zrp5"
	I0601 19:51:53.383009       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-w4nzs"
	I0601 19:51:53.393823       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-7n8j8"
	I0601 19:51:53.782934       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 19:51:53.796418       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-w4nzs"
	I0601 19:51:57.272822       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f1aa86d0d8db] <==
	* E0601 19:51:56.908713       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0601 19:51:56.913327       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0601 19:51:56.971538       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0601 19:51:56.975259       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0601 19:51:56.978747       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0601 19:51:56.982135       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0601 19:51:57.083988       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 19:51:57.084062       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 19:51:57.084119       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:51:57.316337       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:51:57.316449       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:51:57.316462       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:51:57.316487       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:51:57.320759       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:51:57.321686       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:51:57.321824       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:51:57.322069       1 config.go:317] "Starting service config controller"
	I0601 19:51:57.322089       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:51:57.470035       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:51:57.470036       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [7bdc3ac58272] <==
	* W0601 19:51:34.481172       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:51:34.481283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:51:34.488738       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 19:51:34.488843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 19:51:34.505925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:51:34.506073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:51:34.548412       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 19:51:34.548575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:51:34.671236       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 19:51:34.671354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 19:51:34.708713       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:51:34.708886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:51:34.719519       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 19:51:34.719635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 19:51:34.769874       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:51:34.770104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:51:36.070305       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:51:36.070512       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:51:36.148669       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:51:36.148791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:51:36.158656       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:51:36.158803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:51:36.225793       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:51:36.225908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0601 19:51:40.996488       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:50:34 UTC, end at Wed 2022-06-01 19:53:52 UTC. --
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c/02dfa67b52ce3f4b14fd33a413c6af320d1ba7cc3c2b0b2079d05554c77ec615: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9/108c605be7622a53fa310f8de81dbb080d1a4ea352656d14c07fd5cb6153678d: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e/3aca3289aa7d621ae364a025824538c36d17fedd012b7e1c6ccb86bcada1a02f: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c/02dfa67b52ce3f4b14fd33a413c6af320d1ba7cc3c2b0b2079d05554c77ec615: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod96f6123b-0375-4783-a3c1-5eb5492e2274/d32bfba3613c906952c9929996edde1a3ec32306c3a0356389cac2597a455ad7: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9/108c605be7622a53fa310f8de81dbb080d1a4ea352656d14c07fd5cb6153678d: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872435    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod83e593fc-f274-441f-bf87-fabed1026c3c] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod83e593fc-f274-441f-bf87-fabed1026c3c] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872479    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod1a73b20778048cd7301f3a2e09e18ac9] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1a73b20778048cd7301f3a2e09e18ac9] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872478    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod96f6123b-0375-4783-a3c1-5eb5492e2274] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod96f6123b-0375-4783-a3c1-5eb5492e2274] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod96f6123b-0375-4783-a3c1-5eb5492e2274]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1/f99b758f60f4bba23d4a6c19171868150b974d8db062e878e28d4c5fcd241b3e: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872818    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod8ad7f5d9bf658e62a6dc5768e6dddc63/4ce8d4c5d3747e160349a3b42b715c73e36ff5d52e105dc8ffa7a0f8f05ffb7a: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/poda0e38c0b-c6b2-481c-a30c-59149b6df031/641447dc985ff54b52236bd6718c4bc5369e3a8f8d9179e66346931c0b259fa7: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872919    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod8ad7f5d9bf658e62a6dc5768e6dddc63] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod8ad7f5d9bf658e62a6dc5768e6dddc63] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod8ad7f5d9bf658e62a6dc5768e6dddc63]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872958    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort poda0e38c0b-c6b2-481c-a30c-59149b6df031] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda0e38c0b-c6b2-481c-a30c-59149b6df031] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/poda0e38c0b-c6b2-481c-a30c-59149b6df031]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7e80c9371e0031ad77b595f19cd7dead/76314aa4d16d0cd9ffe03cc22b7b679d83cac154b1f10d1343459c8c1e735dbf: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e/3aca3289aa7d621ae364a025824538c36d17fedd012b7e1c6ccb86bcada1a02f: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873061    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7e80c9371e0031ad77b595f19cd7dead] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7e80c9371e0031ad77b595f19cd7dead] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7e80c9371e0031ad77b595f19cd7dead]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podb7baff4b11c83009c03ac4e10ad08172/78a06b8322d2ca8ed78fda46f898af8db5aea3f5dbcd8b2e3be0c8bba52aab8b: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873079    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod9e3eeab8-02f8-479f-a2ba-a3331060893e] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod9e3eeab8-02f8-479f-a2ba-a3331060893e] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873166    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podb7baff4b11c83009c03ac4e10ad08172] err="unable to destroy cgroup paths for cgroup [kubepods burstable podb7baff4b11c83009c03ac4e10ad08172] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podb7baff4b11c83009c03ac4e10ad08172]"
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 01 19:53:17 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:17.449635    4120 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: kubelet.service: Succeeded.
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f19367b3a8f4] <==
	* I0601 19:52:44.461736       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:52:44.499175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:52:44.499429       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:52:44.529956       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:52:44.530598       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25461831-d366-436c-9a64-a8a6c6e16c64", APIVersion:"v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031 became leader
	I0601 19:52:44.530606       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031!
	I0601 19:52:44.631035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031!
	
	

-- /stdout --
** stderr ** 
	E0601 19:53:48.883475    5840 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220601194928-3412 -n pause-20220601194928-3412
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220601194928-3412 -n pause-20220601194928-3412: exit status 2 (7.0161463s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220601194928-3412" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220601194928-3412
helpers_test.go:231: (dbg) Done: docker inspect pause-20220601194928-3412: (1.1337537s)
helpers_test.go:235: (dbg) docker inspect pause-20220601194928-3412:

-- stdout --
	[
	    {
	        "Id": "18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094",
	        "Created": "2022-06-01T19:50:31.782444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:50:33.2539211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/hostname",
	        "HostsPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/hosts",
	        "LogPath": "/var/lib/docker/containers/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094/18c3a3351267abdfe12c4c2605ce13259e99a9eaddffe10e939e522ec7669094-json.log",
	        "Name": "/pause-20220601194928-3412",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20220601194928-3412:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220601194928-3412",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a-init/diff:/var/lib/docker/overlay2/487b259deb346e6ca1e96023cfc1832638489725b45384e10e2c2effe462993c/diff:/var/lib/docker/overlay2/7830a7ee158a10893945c1b577efeb821d499cce7646d95d3c0cffb3ed372dca/diff:/var/lib/docker/overlay2/6fe83b204fd4124b69c52dc2b8620b75ac92764b58a8d1af6662ff240e517719/diff:/var/lib/docker/overlay2/6362560b46c9fab8d6514c8429f6275481f64020b6a76226333ec63d40b3509c/diff:/var/lib/docker/overlay2/b947dedac2c38cb9982c9b363e89606d658250ef2798320fdf3517f747048abd/diff:/var/lib/docker/overlay2/bc2839e6d5fd56592e9530bb7f1f81ed9502bdb7539e7f429732e9cf4cd3b17d/diff:/var/lib/docker/overlay2/1b3239e13a55e9fa626a7541842d884445974471039cc2d9226ad10f2b953536/diff:/var/lib/docker/overlay2/1884c2d81ecac540a3174fb86cefef2fd199eaa5c78d29afe6c63aff263f9584/diff:/var/lib/docker/overlay2/d1c361312180db411937b7786e1329e12f9ed7b9439d4574d6d9a237a8ef8a9e/diff:/var/lib/docker/overlay2/15125b9e77872950f8bc77e7ec27026feb64d93311200f76586c570bbceb3810/diff:/var/lib/docker/overlay2/1778c10167346a2b58dd494e4689512b56050eed4b6df53a451f9aa373c3af35/diff:/var/lib/docker/overlay2/e45fa45d984d0fdd2eaca3b15c5e81abaa51b6b84fc051f20678d16cb6548a34/diff:/var/lib/docker/overlay2/54cea2bf354fab8e2c392a574195b06b919122ff6a1fb01b05f554ba43d9719e/diff:/var/lib/docker/overlay2/8667e3403c29f1a18aaababc226712f548d7dd623a4b9ac413520cf72955fb40/diff:/var/lib/docker/overlay2/5d20284be4fd7015d5b8eb6ae55b108a262e3c66cdaa9a8c4c23a6eb1726d4da/diff:/var/lib/docker/overlay2/d623242b443d7de7f75761cda756115d0f9df9f3b73144554928ceac06876d5b/diff:/var/lib/docker/overlay2/143dd7f527aa222e0eeaafe5e0182140c95e402aa335e7994b2aa7f1e6b6ba3c/diff:/var/lib/docker/overlay2/d690aea98cc6cb39fdd3f6660997b792085628157b14d576701adc72d3e6cf55/diff:/var/lib/docker/overlay2/2bb1d07709342e3bcb4feda7dc7d17fa9707986bf88cd7dc52eab255748276e0/diff:/var/lib/docker/overlay2/ea79e7f8097cf29c435b8a18ee6332b067ec4f7858b6eaabf897d2076a8deb3e/diff:/var/lib/docker/overlay2/dab209c0bb58d228f914118438b0a79649c46857e6fcb416c0c556c049154f5d/diff:/var/lib/docker/overlay2/3bd421aaea3202bb8715cdd0f452aa411f20f2025b05d6a03811ebc7d0347896/diff:/var/lib/docker/overlay2/7dc112f5a6dc7809e579b2eaaeef54d3d5ee1326e9f35817dad641bc4e2c095a/diff:/var/lib/docker/overlay2/772b23d424621d351ce90f47e351441dc7fb224576441813bb86be52c0552022/diff:/var/lib/docker/overlay2/86ea33f163c6d58acb53a8e5bb27e1c131a6c915d7459ca03c90383b299fde58/diff:/var/lib/docker/overlay2/58deaba6fb571643d48dd090dd850eeb8fd343f41591580f4509fe61280e87de/diff:/var/lib/docker/overlay2/d8e5be8b94fe5858e777434bd7d360128719def82a5e7946fd4cb69aecab39fe/diff:/var/lib/docker/overlay2/a319e02b15899f20f933362a00fa40be829441edea2a0be36cc1e30b3417cf57/diff:/var/lib/docker/overlay2/b315efdf7f2b5f50f74664829533097f21ab8bda14478b76e9b5781079830b20/diff:/var/lib/docker/overlay2/bb96faec132eb5919c94fc772f61e63514308af6f72ec141483a94a85a77cc3b/diff:/var/lib/docker/overlay2/55dbff36528117ad96b3be9ee2396f7faee2f0b493773aa5abf5ba2b23a5f728/diff:/var/lib/docker/overlay2/f11da52264a1f34c3b2180d2adcbcb7cc077c7f91611974bf0946d6bea248de5/diff:/var/lib/docker/overlay2/6ca19b0a8327fcd8f60b06c6b0f4519ff5f0f3eacd034e6c5c16ed45239f2238/diff:/var/lib/docker/overlay2/f86ed588a9cb5b359a174312bf8595e8e896ba3d8922b0bae1d8839518d24fb6/diff:/var/lib/docker/overlay2/0bf0e1906e62c903f71626646e2339b8e2c809d40948898d803dcaf0218ed0dd/diff:/var/lib/docker/overlay2/c8ff277ec5a9fa0db24ad64c7e0523b2b5a5c7d64f2148a0c9823fdd5bc60cad/diff:/var/lib/docker/overlay2/4cfbf9fc2a4a968773220ae74312f07a616afc80cbf9a4b68e2c2357c09ca009/diff:/var/lib/docker/overlay2/9a235e4b15bee3f10260f9356535723bf351a49b1f19af094d59a1439b7a9632/diff:/var/lib/docker/overlay2/9699d245a454ce1e21f1ac875a0910a63fb34d3d2870f163d8b6d258f33c2f4f/diff:/var/lib/docker/overlay2/6e093a9dfe282a2a53a4081251541e0c5b4176bb42d9c9bf908f19b1fdc577f5/diff:/var/lib/docker/overlay2/98036438a55a1794d298c11dc1eb0633e06ed433b84d24a3972e634a0b11deb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/955efc43298e2b985cbbb66b013e5d1f10e14c34487c38aa7ea385034ab9b98a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20220601194928-3412",
	                "Source": "/var/lib/docker/volumes/pause-20220601194928-3412/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220601194928-3412",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220601194928-3412",
	                "name.minikube.sigs.k8s.io": "pause-20220601194928-3412",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "156f723e1e9d36ef19e56832e80e0b7533826382adbb71a72695a73b1d2b7ad3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/156f723e1e9d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220601194928-3412": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18c3a3351267",
	                        "pause-20220601194928-3412"
	                    ],
	                    "NetworkID": "236303b3a2bb67679c11e5044d40d1907ff737afd0b0490c48a6dcf0ed6cc3df",
	                    "EndpointID": "6a11984d145c8b540d3966a00ec4b85aa5a0027325a6df1dd811bb8c29609da6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
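The `NetworkSettings.Ports` section of the inspect output above maps each exposed container port (22, 2376, 32443, 5000, 8443) to a `127.0.0.1` host port that minikube tunnels through on Windows. As a hedged illustration (the JSON fragment below is abridged from the output above, not the full inspect document), the same mapping can be extracted programmatically:

```python
import json

# Abridged from the `docker inspect` output above: NetworkSettings.Ports
# maps each container port to the localhost host port it is published on.
inspect_output = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "61074"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "61078"}]
}}}]
"""

container = json.loads(inspect_output)[0]
ports = {
    port: bindings[0]["HostPort"]
    for port, bindings in container["NetworkSettings"]["Ports"].items()
}
print(ports)  # {'22/tcp': '61074', '8443/tcp': '61078'}
```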

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601194928-3412 -n pause-20220601194928-3412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601194928-3412 -n pause-20220601194928-3412: exit status 2 (6.8663381s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20220601194928-3412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-20220601194928-3412 logs -n 25: (20.3303479s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |       User        |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	| stop    | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:46 GMT | 01 Jun 22 19:46 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	| start   | -p                                     | missing-upgrade-20220601194025-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:44 GMT | 01 Jun 22 19:47 GMT |
	|         | missing-upgrade-20220601194025-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| start   | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:45 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220601194025-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | missing-upgrade-20220601194025-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220601194002-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:47 GMT |
	|         | stopped-upgrade-20220601194002-3412    |                                        |                   |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:46 GMT | 01 Jun 22 19:48 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	|         | --memory=2200                          |                                        |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |                   |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220601193729-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:48 GMT | 01 Jun 22 19:49 GMT |
	|         | cert-expiration-20220601193729-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:48 GMT | 01 Jun 22 19:49 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	|         | --memory=2200                          |                                        |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |                   |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220601193729-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:49 GMT |
	|         | cert-expiration-20220601193729-3412    |                                        |                   |                |                     |                     |
	| delete  | -p                                     | kubernetes-upgrade-20220601194404-3412 | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:49 GMT |
	|         | kubernetes-upgrade-20220601194404-3412 |                                        |                   |                |                     |                     |
	| start   | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:47 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |                   |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |                   |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |                   |                |                     |                     |
	| ssh     | cert-options-20220601194744-3412       | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | ssh openssl x509 -text -noout -in      |                                        |                   |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |                   |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	|         | -- sudo cat                            |                                        |                   |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |                   |                |                     |                     |
	| delete  | -p                                     | cert-options-20220601194744-3412       | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:50 GMT | 01 Jun 22 19:50 GMT |
	|         | cert-options-20220601194744-3412       |                                        |                   |                |                     |                     |
	| start   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:52 GMT |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --install-addons=false                 |                                        |                   |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |                   |                |                     |                     |
	| start   | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:49 GMT | 01 Jun 22 19:52 GMT |
	|         | --memory=2048                          |                                        |                   |                |                     |                     |
	|         | --alsologtostderr                      |                                        |                   |                |                     |                     |
	|         | --wait=true --wait-timeout=5m          |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| ssh     | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | pgrep -a kubelet                       |                                        |                   |                |                     |                     |
	| start   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | --alsologtostderr -v=1                 |                                        |                   |                |                     |                     |
	|         | --driver=docker                        |                                        |                   |                |                     |                     |
	| start   | -p                                     | running-upgrade-20220601194733-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:51 GMT | 01 Jun 22 19:52 GMT |
	|         | running-upgrade-20220601194733-3412    |                                        |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |                   |                |                     |                     |
	|         | -v=1 --driver=docker                   |                                        |                   |                |                     |                     |
	| pause   | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:52 GMT |
	|         | --alsologtostderr -v=5                 |                                        |                   |                |                     |                     |
	| unpause | -p pause-20220601194928-3412           | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	|         | --alsologtostderr -v=5                 |                                        |                   |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601194733-3412    | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	|         | running-upgrade-20220601194733-3412    |                                        |                   |                |                     |                     |
	| delete  | -p auto-20220601193434-3412            | auto-20220601193434-3412               | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:52 GMT | 01 Jun 22 19:53 GMT |
	| logs    | pause-20220601194928-3412 logs         | pause-20220601194928-3412              | minikube4\jenkins | v1.26.0-beta.1 | 01 Jun 22 19:53 GMT | 01 Jun 22 19:53 GMT |
	|         | -n 25                                  |                                        |                   |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* I0601 19:53:24.281895    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:26.285103    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	Log file created at: 2022/06/01 19:53:28
	Running on machine: minikube4
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 19:53:28.486890   11184 out.go:296] Setting OutFile to fd 1752 ...
	I0601 19:53:28.550876   11184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:28.550876   11184 out.go:309] Setting ErrFile to fd 1764...
	I0601 19:53:28.550876   11184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:28.562881   11184 out.go:303] Setting JSON to false
	I0601 19:53:28.565889   11184 start.go:115] hostinfo: {"hostname":"minikube4","uptime":73323,"bootTime":1654039885,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 19:53:28.565889   11184 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 19:53:28.571877   11184 out.go:177] * [false-20220601193442-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 19:53:28.575905   11184 notify.go:193] Checking for updates...
	I0601 19:53:28.581916   11184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:53:28.587900   11184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 19:53:28.594908   11184 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 19:53:28.599896   11184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0601 19:53:28.023896    9176 cli_runner.go:211] docker network inspect calico-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 19:53:28.023896    9176 cli_runner.go:217] Completed: docker network inspect calico-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2532002s)
	I0601 19:53:28.030904    9176 network_create.go:272] running [docker network inspect calico-20220601193451-3412] to gather additional debugging logs...
	I0601 19:53:28.030904    9176 cli_runner.go:164] Run: docker network inspect calico-20220601193451-3412
	W0601 19:53:29.216451    9176 cli_runner.go:211] docker network inspect calico-20220601193451-3412 returned with exit code 1
	I0601 19:53:29.216451    9176 cli_runner.go:217] Completed: docker network inspect calico-20220601193451-3412: (1.1854849s)
	I0601 19:53:29.216451    9176 network_create.go:275] error running [docker network inspect calico-20220601193451-3412]: docker network inspect calico-20220601193451-3412: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220601193451-3412
	I0601 19:53:29.216451    9176 network_create.go:277] output of [docker network inspect calico-20220601193451-3412]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220601193451-3412
	
	** /stderr **
	I0601 19:53:29.223459    9176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 19:53:30.373290    9176 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1486207s)
	I0601 19:53:30.398443    9176 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006ed0] misses:0}
	I0601 19:53:30.398443    9176 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:30.398443    9176 network_create.go:115] attempt to create docker network calico-20220601193451-3412 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 19:53:30.406056    9176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412
	W0601 19:53:31.564118    9176 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412 returned with exit code 1
	I0601 19:53:31.564118    9176 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412: (1.158002s)
	W0601 19:53:31.564118    9176 network_create.go:107] failed to create docker network calico-20220601193451-3412 192.168.49.0/24, will retry: subnet is taken
	I0601 19:53:31.584133    9176 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006ed0] amended:false}} dirty:map[] misses:0}
	I0601 19:53:31.584133    9176 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:31.605132    9176 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006ed0] amended:true}} dirty:map[192.168.49.0:0xc000006ed0 192.168.58.0:0xc0005b8688] misses:0}
	I0601 19:53:31.605132    9176 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:31.605132    9176 network_create.go:115] attempt to create docker network calico-20220601193451-3412 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 19:53:31.613126    9176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412
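The interleaved `network_create.go` lines above show the retry pattern minikube uses here: it reserves 192.168.49.0/24, attempts `docker network create`, gets "subnet is taken", skips subnets with an unexpired reservation, and steps to 192.168.58.0/24. A hedged sketch of that pattern (the candidate list and the failing create call are illustrative stand-ins, not minikube's actual code):

```python
import ipaddress

# Illustrative candidate subnets, mirroring the 192.168.49.0 -> 192.168.58.0
# progression visible in the log above.
CANDIDATES = ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"]


def create_network(cidr, taken):
    """Stand-in for `docker network create`: fails if the subnet is in use."""
    if cidr in taken:
        raise RuntimeError("subnet is taken")
    return cidr


def pick_subnet(taken):
    reserved = set()
    for cidr in CANDIDATES:
        ipaddress.ip_network(cidr)  # validate the candidate CIDR
        if cidr in reserved:
            continue  # skip subnets that still hold an unexpired reservation
        reserved.add(cidr)  # reserve the subnet before attempting creation
        try:
            return create_network(cidr, taken)
        except RuntimeError:
            continue  # "subnet is taken": retry with the next candidate
    raise RuntimeError("no free subnet found")


print(pick_subnet({"192.168.49.0/24"}))  # 192.168.58.0/24
```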
	I0601 19:53:28.782581    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:30.784224    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:33.133428    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:28.603879   11184 config.go:178] Loaded profile config "calico-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:28.603879   11184 config.go:178] Loaded profile config "cilium-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:28.603879   11184 config.go:178] Loaded profile config "pause-20220601194928-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:28.604887   11184 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 19:53:31.440117   11184 docker.go:137] docker version: linux-20.10.14
	I0601 19:53:31.448763   11184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:53:33.681806   11184 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2329274s)
	I0601 19:53:33.682806   11184 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-06-01 19:53:32.5522452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:53:33.685818   11184 out.go:177] * Using the docker driver based on user configuration
	I0601 19:53:33.688819   11184 start.go:284] selected driver: docker
	I0601 19:53:33.688819   11184 start.go:806] validating driver "docker" against <nil>
	I0601 19:53:33.688819   11184 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 19:53:33.761406   11184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:53:36.006782   11184 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2452604s)
	I0601 19:53:36.006782   11184 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-06-01 19:53:34.8571698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:53:36.006782   11184 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 19:53:36.007789   11184 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 19:53:36.012781   11184 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 19:53:36.015785   11184 cni.go:95] Creating CNI manager for "false"
	I0601 19:53:36.015785   11184 start_flags.go:306] config:
	{Name:false-20220601193442-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220601193442-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:53:36.020789   11184 out.go:177] * Starting control plane node false-20220601193442-3412 in cluster false-20220601193442-3412
	I0601 19:53:36.023841   11184 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 19:53:36.027929   11184 out.go:177] * Pulling base image ...
	I0601 19:53:32.902988    9176 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412: (1.2897117s)
	I0601 19:53:32.902988    9176 network_create.go:99] docker network calico-20220601193451-3412 192.168.58.0/24 created
	I0601 19:53:32.902988    9176 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220601193451-3412" container
	I0601 19:53:32.918221    9176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 19:53:34.045134    9176 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1267999s)
	I0601 19:53:34.052300    9176 cli_runner.go:164] Run: docker volume create calico-20220601193451-3412 --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true
	I0601 19:53:35.238060    9176 cli_runner.go:217] Completed: docker volume create calico-20220601193451-3412 --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true: (1.1855506s)
	I0601 19:53:35.238060    9176 oci.go:103] Successfully created a docker volume calico-20220601193451-3412
	I0601 19:53:35.245553    9176 cli_runner.go:164] Run: docker run --rm --name calico-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --entrypoint /usr/bin/test -v calico-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 19:53:36.031781   11184 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:53:36.031781   11184 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 19:53:36.031781   11184 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 19:53:36.031781   11184 cache.go:57] Caching tarball of preloaded images
	I0601 19:53:36.031781   11184 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 19:53:36.032790   11184 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 19:53:36.032790   11184 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\config.json ...
	I0601 19:53:36.032790   11184 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\config.json: {Name:mke37f352a98f26dd7e1228f51832cbc170c07da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:53:37.233833   11184 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 19:53:37.233962   11184 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 19:53:37.233962   11184 cache.go:206] Successfully downloaded all kic artifacts
	I0601 19:53:37.233962   11184 start.go:352] acquiring machines lock for false-20220601193442-3412: {Name:mk481b213e183ba859ca07fa5fa335455de0e416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 19:53:37.233962   11184 start.go:356] acquired machines lock for "false-20220601193442-3412" in 0s
	I0601 19:53:37.233962   11184 start.go:91] Provisioning new machine with config: &{Name:false-20220601193442-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220601193442-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:53:37.234630   11184 start.go:131] createHost starting for "" (driver="docker")
	I0601 19:53:35.191839    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:37.279346    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:37.238479   11184 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 19:53:37.238900   11184 start.go:165] libmachine.API.Create for "false-20220601193442-3412" (driver="docker")
	I0601 19:53:37.238988   11184 client.go:168] LocalClient.Create starting
	I0601 19:53:37.239266   11184 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0601 19:53:37.239266   11184 main.go:134] libmachine: Decoding PEM data...
	I0601 19:53:37.239266   11184 main.go:134] libmachine: Parsing certificate...
	I0601 19:53:37.239948   11184 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0601 19:53:37.239948   11184 main.go:134] libmachine: Decoding PEM data...
	I0601 19:53:37.239948   11184 main.go:134] libmachine: Parsing certificate...
	I0601 19:53:37.248337   11184 cli_runner.go:164] Run: docker network inspect false-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 19:53:38.400565   11184 cli_runner.go:211] docker network inspect false-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 19:53:38.400565   11184 cli_runner.go:217] Completed: docker network inspect false-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1521685s)
	I0601 19:53:38.406579   11184 network_create.go:272] running [docker network inspect false-20220601193442-3412] to gather additional debugging logs...
	I0601 19:53:38.407570   11184 cli_runner.go:164] Run: docker network inspect false-20220601193442-3412
	I0601 19:53:38.227234    9176 cli_runner.go:217] Completed: docker run --rm --name calico-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --entrypoint /usr/bin/test -v calico-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib: (2.9814934s)
	I0601 19:53:38.227303    9176 oci.go:107] Successfully prepared a docker volume calico-20220601193451-3412
	I0601 19:53:38.227334    9176 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:53:38.227386    9176 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 19:53:38.235022    9176 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 19:53:39.791575    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:42.195862    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	W0601 19:53:39.546985   11184 cli_runner.go:211] docker network inspect false-20220601193442-3412 returned with exit code 1
	I0601 19:53:39.547190   11184 cli_runner.go:217] Completed: docker network inspect false-20220601193442-3412: (1.1393333s)
	I0601 19:53:39.547190   11184 network_create.go:275] error running [docker network inspect false-20220601193442-3412]: docker network inspect false-20220601193442-3412: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220601193442-3412
	I0601 19:53:39.547190   11184 network_create.go:277] output of [docker network inspect false-20220601193442-3412]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220601193442-3412
	
	** /stderr **
	I0601 19:53:39.555225   11184 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 19:53:40.725671   11184 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1702477s)
	I0601 19:53:40.745808   11184 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c0c420] misses:0}
	I0601 19:53:40.746188   11184 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:40.746188   11184 network_create.go:115] attempt to create docker network false-20220601193442-3412 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 19:53:40.753438   11184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412
	W0601 19:53:41.863360   11184 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412 returned with exit code 1
	I0601 19:53:41.863360   11184 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412: (1.1098646s)
	W0601 19:53:41.863360   11184 network_create.go:107] failed to create docker network false-20220601193442-3412 192.168.49.0/24, will retry: subnet is taken
	I0601 19:53:41.888214   11184 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:false}} dirty:map[] misses:0}
	I0601 19:53:41.888322   11184 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:41.911508   11184 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:true}} dirty:map[192.168.49.0:0xc000c0c420 192.168.58.0:0xc000402360] misses:0}
	I0601 19:53:41.912499   11184 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:41.912499   11184 network_create.go:115] attempt to create docker network false-20220601193442-3412 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 19:53:41.920492   11184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412
	W0601 19:53:42.998158   11184 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412 returned with exit code 1
	I0601 19:53:42.998158   11184 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412: (1.0776105s)
	W0601 19:53:42.998158   11184 network_create.go:107] failed to create docker network false-20220601193442-3412 192.168.58.0/24, will retry: subnet is taken
	I0601 19:53:43.019153   11184 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:true}} dirty:map[192.168.49.0:0xc000c0c420 192.168.58.0:0xc000402360] misses:1}
	I0601 19:53:43.019153   11184 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:43.036160   11184 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:true}} dirty:map[192.168.49.0:0xc000c0c420 192.168.58.0:0xc000402360 192.168.67.0:0xc000c0c4b8] misses:1}
	I0601 19:53:43.037111   11184 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:43.037111   11184 network_create.go:115] attempt to create docker network false-20220601193442-3412 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0601 19:53:43.045370   11184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412
	I0601 19:53:45.125333    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:47.193972    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	W0601 19:53:44.118636   11184 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412 returned with exit code 1
	I0601 19:53:44.118636   11184 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412: (1.0732102s)
	W0601 19:53:44.118636   11184 network_create.go:107] failed to create docker network false-20220601193442-3412 192.168.67.0/24, will retry: subnet is taken
	I0601 19:53:44.138626   11184 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:true}} dirty:map[192.168.49.0:0xc000c0c420 192.168.58.0:0xc000402360 192.168.67.0:0xc000c0c4b8] misses:2}
	I0601 19:53:44.138626   11184 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:44.159609   11184 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c0c420] amended:true}} dirty:map[192.168.49.0:0xc000c0c420 192.168.58.0:0xc000402360 192.168.67.0:0xc000c0c4b8 192.168.76.0:0xc00012a5e0] misses:2}
	I0601 19:53:44.159609   11184 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:44.159609   11184 network_create.go:115] attempt to create docker network false-20220601193442-3412 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0601 19:53:44.166604   11184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412
	I0601 19:53:46.187681   11184 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601193442-3412: (2.0209725s)
	I0601 19:53:46.187681   11184 network_create.go:99] docker network false-20220601193442-3412 192.168.76.0/24 created
	I0601 19:53:46.187681   11184 kic.go:106] calculated static IP "192.168.76.2" for the "false-20220601193442-3412" container
	I0601 19:53:46.202680   11184 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 19:53:47.340713   11184 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1379744s)
	I0601 19:53:47.346714   11184 cli_runner.go:164] Run: docker volume create false-20220601193442-3412 --label name.minikube.sigs.k8s.io=false-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true
	I0601 19:53:48.481187   11184 cli_runner.go:217] Completed: docker volume create false-20220601193442-3412 --label name.minikube.sigs.k8s.io=false-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true: (1.1334032s)
	I0601 19:53:48.481187   11184 oci.go:103] Successfully created a docker volume false-20220601193442-3412
	I0601 19:53:48.493198   11184 cli_runner.go:164] Run: docker run --rm --name false-20220601193442-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220601193442-3412 --entrypoint /usr/bin/test -v false-20220601193442-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 19:53:49.697143    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:51.897760    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:54.202425    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:56.698306    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:53:56.264235   11184 cli_runner.go:217] Completed: docker run --rm --name false-20220601193442-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220601193442-3412 --entrypoint /usr/bin/test -v false-20220601193442-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib: (7.7706348s)
	I0601 19:53:56.264235   11184 oci.go:107] Successfully prepared a docker volume false-20220601193442-3412
	I0601 19:53:56.264235   11184 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:53:56.264235   11184 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 19:53:56.272244   11184 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220601193442-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 19:53:59.033254    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:02.046039    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:04.088576    9176 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (25.8522145s)
	I0601 19:54:04.088576    9176 kic.go:188] duration metric: took 25.859850 seconds to extract preloaded images to volume
	I0601 19:54:04.096562    9176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:54:06.274144    9176 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1774692s)
	I0601 19:54:06.274144    9176 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:70 SystemTime:2022-06-01 19:54:05.1748376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:54:06.281148    9176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 19:54:04.189144    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	I0601 19:54:06.723252    6740 pod_ready.go:102] pod "cilium-72kgq" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:50:34 UTC, end at Wed 2022-06-01 19:54:15 UTC. --
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309051400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309131000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.309176600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312181900Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312315500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312452700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.312515900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.335315200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.353908800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354015500Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354032500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354048100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354056000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354063400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.354420200Z" level=info msg="Loading containers: start."
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.593425100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.693203600Z" level=info msg="Loading containers: done."
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.718021800Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.718175100Z" level=info msg="Daemon has completed initialization"
	Jun 01 19:50:57 pause-20220601194928-3412 systemd[1]: Started Docker Application Container Engine.
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.788665400Z" level=info msg="API listen on [::]:2376"
	Jun 01 19:50:57 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:50:57.794543200Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 19:52:05 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:52:05.131083900Z" level=info msg="ignoring event" container=73b4e698b498ee66d9377f22adbac1b577f1b75a7fe208c9258dea44477457e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:52:05 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:52:05.393811800Z" level=info msg="ignoring event" container=17383e44086b7bf7da720381011c707ded900769814831bbf167cb77b922bed7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:53:18 pause-20220601194928-3412 dockerd[510]: time="2022-06-01T19:53:18.222568500Z" level=error msg="Handler for POST /v1.41/containers/723ae8a7be41/pause returned error: Cannot pause container 723ae8a7be4143203ce3e98a2242812da835b79477739276946430d7126424b7: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* time="2022-06-01T19:54:17Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	f19367b3a8f4   6e38f40d628d           "/storage-provisioner"   About a minute ago   Up About a minute (Paused)             k8s_storage-provisioner_storage-provisioner_kube-system_9841ef75-f6f7-4546-8b4e-2aadf5699c2b_0
	d05d6272ca57   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_storage-provisioner_kube-system_9841ef75-f6f7-4546-8b4e-2aadf5699c2b_0
	746b0665f137   a4ca41631cc7           "/coredns -conf /etc…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_coredns_coredns-64897985d-7n8j8_kube-system_9256be30-fb9d-40ab-867e-89615489d771_0
	f1aa86d0d8db   4c0375452406           "/usr/local/bin/kube…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-proxy_kube-proxy-5zrp5_kube-system_407893ee-127b-4049-b043-da518058f009_0
	bff3c609c584   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_coredns-64897985d-7n8j8_kube-system_9256be30-fb9d-40ab-867e-89615489d771_0
	fbb3a2020c77   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-proxy-5zrp5_kube-system_407893ee-127b-4049-b043-da518058f009_0
	723ae8a7be41   25f8c7f3da61           "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                           k8s_etcd_etcd-pause-20220601194928-3412_kube-system_e6809dca5ea4d80a1c02803b0a98b488_0
	40b58d79d3b2   df7b72818ad2           "kube-controller-man…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-controller-manager_kube-controller-manager-pause-20220601194928-3412_kube-system_4679fba103d87cd475bcbad3d12eacc5_0
	7bdc3ac58272   595f327f224a           "kube-scheduler --au…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-scheduler_kube-scheduler-pause-20220601194928-3412_kube-system_42764eeb51bfab545d2537d74337e71c_0
	6a1f8d9c25e7   8fa62c12256d           "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-apiserver_kube-apiserver-pause-20220601194928-3412_kube-system_87e851f4b00765eb831c0ab86bae4ace_0
	9f1178964140   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-controller-manager-pause-20220601194928-3412_kube-system_4679fba103d87cd475bcbad3d12eacc5_0
	988ce573a0b0   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_etcd-pause-20220601194928-3412_kube-system_e6809dca5ea4d80a1c02803b0a98b488_0
	81893bc03c3b   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-scheduler-pause-20220601194928-3412_kube-system_42764eeb51bfab545d2537d74337e71c_0
	3d0fdf412d9a   k8s.gcr.io/pause:3.6   "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-apiserver-pause-20220601194928-3412_kube-system_87e851f4b00765eb831c0ab86bae4ace_0
	
	* 
	* ==> coredns [746b0665f137] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jun 1 19:32] WSL2: Performing memory compaction.
	[Jun 1 19:33] WSL2: Performing memory compaction.
	[Jun 1 19:34] WSL2: Performing memory compaction.
	[Jun 1 19:35] WSL2: Performing memory compaction.
	[Jun 1 19:37] WSL2: Performing memory compaction.
	[Jun 1 19:38] WSL2: Performing memory compaction.
	[Jun 1 19:39] WSL2: Performing memory compaction.
	[Jun 1 19:41] WSL2: Performing memory compaction.
	[ +31.599109] process 'docker/tmp/qemu-check383814209/check' started with executable stack
	[Jun 1 19:42] WSL2: Performing memory compaction.
	[Jun 1 19:43] WSL2: Performing memory compaction.
	[Jun 1 19:47] WSL2: Performing memory compaction.
	[Jun 1 19:49] WSL2: Performing memory compaction.
	[Jun 1 19:50] WSL2: Performing memory compaction.
	[Jun 1 19:51] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.006260] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010917] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000002] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000003] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun 1 19:53] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [723ae8a7be41] <==
	* {"level":"info","ts":"2022-06-01T19:53:15.318Z","caller":"traceutil/trace.go:171","msg":"trace[1077161459] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:543; }","duration":"591.8406ms","start":"2022-06-01T19:53:14.726Z","end":"2022-06-01T19:53:15.318Z","steps":["trace[1077161459] 'agreement among raft nodes before linearized reading'  (duration: 558.9343ms)","trace[1077161459] 'count revisions from in-memory index tree'  (duration: 32.5684ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:15.318Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:14.726Z","time spent":"591.9183ms","remote":"127.0.0.1:36762","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true "}
	{"level":"warn","ts":"2022-06-01T19:53:15.318Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"627.9297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3582"}
	{"level":"info","ts":"2022-06-01T19:53:15.318Z","caller":"traceutil/trace.go:171","msg":"trace[2020903619] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:543; }","duration":"628.5132ms","start":"2022-06-01T19:53:14.690Z","end":"2022-06-01T19:53:15.318Z","steps":["trace[2020903619] 'agreement among raft nodes before linearized reading'  (duration: 595.1702ms)","trace[2020903619] 'range keys from in-memory index tree'  (duration: 32.714ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:15.319Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:14.690Z","time spent":"628.6447ms","remote":"127.0.0.1:36678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":3606,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2022-06-01T19:53:15.846Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:16.347Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:16.848Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:17.350Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:17.576Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.230288s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-7n8j8\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2022-06-01T19:53:17.577Z","caller":"traceutil/trace.go:171","msg":"trace[297424335] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-7n8j8; range_end:; }","duration":"2.2308765s","start":"2022-06-01T19:53:15.345Z","end":"2022-06-01T19:53:17.576Z","steps":["trace[297424335] 'agreement among raft nodes before linearized reading'  (duration: 2.2302121s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:53:17.577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:15.345Z","time spent":"2.2313245s","remote":"127.0.0.1:36678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/pods/kube-system/coredns-64897985d-7n8j8\" "}
	WARNING: 2022/06/01 19:53:17 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-06-01T19:53:17.873Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:18.374Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:18.874Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:19.375Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128013405470281663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-06-01T19:53:19.544Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"4.2028531s","expected-duration":"1s"}
	{"level":"info","ts":"2022-06-01T19:53:19.544Z","caller":"traceutil/trace.go:171","msg":"trace[1986365679] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:577; }","duration":"4.1989464s","start":"2022-06-01T19:53:15.345Z","end":"2022-06-01T19:53:19.544Z","steps":["trace[1986365679] 'read index received'  (duration: 4.1989009s)","trace[1986365679] 'applied index is now lower than readState.Index'  (duration: 41.3µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.1264244s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.0660694s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2022-06-01T19:53:19.548Z","caller":"traceutil/trace.go:171","msg":"trace[1379093940] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:544; }","duration":"3.1268307s","start":"2022-06-01T19:53:16.421Z","end":"2022-06-01T19:53:19.548Z","steps":["trace[1379093940] 'agreement among raft nodes before linearized reading'  (duration: 3.1237342s)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T19:53:19.548Z","caller":"traceutil/trace.go:171","msg":"trace[375780806] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:544; }","duration":"4.0662104s","start":"2022-06-01T19:53:15.482Z","end":"2022-06-01T19:53:19.548Z","steps":["trace[375780806] 'agreement among raft nodes before linearized reading'  (duration: 4.0628643s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:16.421Z","time spent":"3.1269424s","remote":"127.0.0.1:36672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-06-01T19:53:19.548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T19:53:15.482Z","time spent":"4.0663206s","remote":"127.0.0.1:36670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
	
	* 
	* ==> kernel <==
	*  19:54:28 up  2:16,  0 users,  load average: 6.75, 6.79, 4.64
	Linux pause-20220601194928-3412 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6a1f8d9c25e7] <==
	* I0601 19:51:38.002633       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:51:40.278861       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:51:40.386106       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:51:40.407075       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:51:41.571710       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:51:52.805503       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:51:52.808864       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:51:57.325312       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:53:10.248057       1 trace.go:205] Trace[89863760]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:6615ff52-a992-40b8-8e39-cfa59bac3b46,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 19:53:05.703) (total time: 4542ms):
	Trace[89863760]: ---"About to write a response" 4541ms (19:53:10.247)
	Trace[89863760]: [4.5420569s] [4.5420569s] END
	I0601 19:53:10.248223       1 trace.go:205] Trace[1228988290]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:737f44b8-1816-4125-bec9-24dd1c0a4407,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 19:53:05.587) (total time: 4657ms):
	Trace[1228988290]: ---"About to write a response" 4657ms (19:53:10.247)
	Trace[1228988290]: [4.6578647s] [4.6578647s] END
	I0601 19:53:15.320241       1 trace.go:205] Trace[342865520]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:a7f95376-4876-41a6-92ff-8e3d4685ec65,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 19:53:14.687) (total time: 632ms):
	Trace[342865520]: ---"About to write a response" 632ms (19:53:15.319)
	Trace[342865520]: [632.8275ms] [632.8275ms] END
	{"level":"warn","ts":"2022-06-01T19:53:17.573Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00068d880/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0601 19:53:17.574970       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0601 19:53:17.575880       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0601 19:53:17.577212       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0601 19:53:17.578720       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0601 19:53:17.580430       1 trace.go:205] Trace[1291614787]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-64897985d-7n8j8,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:2c33d721-0b90-408c-b829-8932696a4c43,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 19:53:15.345) (total time: 2235ms):
	Trace[1291614787]: [2.2353835s] [2.2353835s] END
	E0601 19:53:17.583773       1 timeout.go:141] post-timeout activity - time-elapsed: 9.5089ms, GET "/api/v1/namespaces/kube-system/pods/coredns-64897985d-7n8j8" result: <nil>
	
	* 
	* ==> kube-controller-manager [40b58d79d3b2] <==
	* I0601 19:51:52.284756       1 range_allocator.go:173] Starting range CIDR allocator
	I0601 19:51:52.284774       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0601 19:51:52.284816       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0601 19:51:52.285586       1 shared_informer.go:247] Caches are synced for GC 
	I0601 19:51:52.374158       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:51:52.375051       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 19:51:52.375384       1 disruption.go:371] Sending events to api server.
	I0601 19:51:52.375259       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:51:52.377826       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:51:52.378373       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 19:51:52.499240       1 event.go:294] "Event occurred" object="kube-system/etcd-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.499254       1 range_allocator.go:374] Set node pause-20220601194928-3412 PodCIDR to [10.244.0.0/24]
	I0601 19:51:52.674605       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.676506       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.676539       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-pause-20220601194928-3412" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 19:51:52.792800       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:51:52.868927       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:51:52.868964       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:51:52.888345       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 19:51:53.069897       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5zrp5"
	I0601 19:51:53.383009       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-w4nzs"
	I0601 19:51:53.393823       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-7n8j8"
	I0601 19:51:53.782934       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 19:51:53.796418       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-w4nzs"
	I0601 19:51:57.272822       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f1aa86d0d8db] <==
	* E0601 19:51:56.908713       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0601 19:51:56.913327       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0601 19:51:56.971538       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0601 19:51:56.975259       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0601 19:51:56.978747       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0601 19:51:56.982135       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0601 19:51:57.083988       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 19:51:57.084062       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 19:51:57.084119       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:51:57.316337       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:51:57.316449       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:51:57.316462       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:51:57.316487       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:51:57.320759       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:51:57.321686       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:51:57.321824       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:51:57.322069       1 config.go:317] "Starting service config controller"
	I0601 19:51:57.322089       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:51:57.470035       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:51:57.470036       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [7bdc3ac58272] <==
	* W0601 19:51:34.481172       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:51:34.481283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:51:34.488738       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 19:51:34.488843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 19:51:34.505925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:51:34.506073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:51:34.548412       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 19:51:34.548575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:51:34.671236       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 19:51:34.671354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 19:51:34.708713       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:51:34.708886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:51:34.719519       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 19:51:34.719635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 19:51:34.769874       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:51:34.770104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:51:36.070305       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:51:36.070512       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:51:36.148669       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:51:36.148791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:51:36.158656       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:51:36.158803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:51:36.225793       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:51:36.225908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0601 19:51:40.996488       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:50:34 UTC, end at Wed 2022-06-01 19:54:28 UTC. --
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c/02dfa67b52ce3f4b14fd33a413c6af320d1ba7cc3c2b0b2079d05554c77ec615: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9/108c605be7622a53fa310f8de81dbb080d1a4ea352656d14c07fd5cb6153678d: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e/3aca3289aa7d621ae364a025824538c36d17fedd012b7e1c6ccb86bcada1a02f: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c/02dfa67b52ce3f4b14fd33a413c6af320d1ba7cc3c2b0b2079d05554c77ec615: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod96f6123b-0375-4783-a3c1-5eb5492e2274/d32bfba3613c906952c9929996edde1a3ec32306c3a0356389cac2597a455ad7: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9/108c605be7622a53fa310f8de81dbb080d1a4ea352656d14c07fd5cb6153678d: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872435    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod83e593fc-f274-441f-bf87-fabed1026c3c] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod83e593fc-f274-441f-bf87-fabed1026c3c] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod83e593fc-f274-441f-bf87-fabed1026c3c]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872479    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod1a73b20778048cd7301f3a2e09e18ac9] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1a73b20778048cd7301f3a2e09e18ac9] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod1a73b20778048cd7301f3a2e09e18ac9]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872478    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod96f6123b-0375-4783-a3c1-5eb5492e2274] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod96f6123b-0375-4783-a3c1-5eb5492e2274] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod96f6123b-0375-4783-a3c1-5eb5492e2274]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1/f99b758f60f4bba23d4a6c19171868150b974d8db062e878e28d4c5fcd241b3e: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872818    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7ee49731-86a3-4152-8fad-6ab9c8ee36e1]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod8ad7f5d9bf658e62a6dc5768e6dddc63/4ce8d4c5d3747e160349a3b42b715c73e36ff5d52e105dc8ffa7a0f8f05ffb7a: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/poda0e38c0b-c6b2-481c-a30c-59149b6df031/641447dc985ff54b52236bd6718c4bc5369e3a8f8d9179e66346931c0b259fa7: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872919    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod8ad7f5d9bf658e62a6dc5768e6dddc63] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod8ad7f5d9bf658e62a6dc5768e6dddc63] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod8ad7f5d9bf658e62a6dc5768e6dddc63]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.872958    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort poda0e38c0b-c6b2-481c-a30c-59149b6df031] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda0e38c0b-c6b2-481c-a30c-59149b6df031] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/poda0e38c0b-c6b2-481c-a30c-59149b6df031]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7e80c9371e0031ad77b595f19cd7dead/76314aa4d16d0cd9ffe03cc22b7b679d83cac154b1f10d1343459c8c1e735dbf: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e/3aca3289aa7d621ae364a025824538c36d17fedd012b7e1c6ccb86bcada1a02f: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873061    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7e80c9371e0031ad77b595f19cd7dead] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7e80c9371e0031ad77b595f19cd7dead] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7e80c9371e0031ad77b595f19cd7dead]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: time="2022-06-01T19:53:15Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podb7baff4b11c83009c03ac4e10ad08172/78a06b8322d2ca8ed78fda46f898af8db5aea3f5dbcd8b2e3be0c8bba52aab8b: device or resource busy"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873079    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod9e3eeab8-02f8-479f-a2ba-a3331060893e] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod9e3eeab8-02f8-479f-a2ba-a3331060893e] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod9e3eeab8-02f8-479f-a2ba-a3331060893e]"
	Jun 01 19:53:15 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:15.873166    4120 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podb7baff4b11c83009c03ac4e10ad08172] err="unable to destroy cgroup paths for cgroup [kubepods burstable podb7baff4b11c83009c03ac4e10ad08172] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podb7baff4b11c83009c03ac4e10ad08172]"
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 01 19:53:17 pause-20220601194928-3412 kubelet[4120]: I0601 19:53:17.449635    4120 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: kubelet.service: Succeeded.
	Jun 01 19:53:17 pause-20220601194928-3412 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f19367b3a8f4] <==
	* I0601 19:52:44.461736       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:52:44.499175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:52:44.499429       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:52:44.529956       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:52:44.530598       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25461831-d366-436c-9a64-a8a6c6e16c64", APIVersion:"v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031 became leader
	I0601 19:52:44.530606       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031!
	I0601 19:52:44.631035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601194928-3412_31814002-4509-4ecb-af8a-4be39fe88031!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 19:54:28.239300   10008 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220601194928-3412 -n pause-20220601194928-3412
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220601194928-3412 -n pause-20220601194928-3412: exit status 2 (7.1049439s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220601194928-3412" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/PauseAgain (85.41s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (612.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220601193451-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220601193451-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (10m11.9030496s)

                                                
                                                
-- stdout --
	* [calico-20220601193451-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220601193451-3412 in cluster calico-20220601193451-3412
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 19:53:16.833611    9176 out.go:296] Setting OutFile to fd 1812 ...
	I0601 19:53:16.903025    9176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:16.903025    9176 out.go:309] Setting ErrFile to fd 1664...
	I0601 19:53:16.903025    9176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:53:16.916026    9176 out.go:303] Setting JSON to false
	I0601 19:53:16.920017    9176 start.go:115] hostinfo: {"hostname":"minikube4","uptime":73311,"bootTime":1654039885,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 19:53:16.925800    9176 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 19:53:16.935516    9176 out.go:177] * [calico-20220601193451-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 19:53:16.940510    9176 notify.go:193] Checking for updates...
	I0601 19:53:16.945514    9176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:53:16.950511    9176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 19:53:16.955361    9176 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 19:53:16.960842    9176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 19:53:16.965668    9176 config.go:178] Loaded profile config "auto-20220601193434-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:16.965668    9176 config.go:178] Loaded profile config "cilium-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:16.966676    9176 config.go:178] Loaded profile config "pause-20220601194928-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:53:16.966824    9176 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 19:53:19.697404    9176 docker.go:137] docker version: linux-20.10.14
	I0601 19:53:19.705411    9176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:53:22.354008    9176 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.6484597s)
	I0601 19:53:22.354995    9176 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-01 19:53:20.9899863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:53:22.357995    9176 out.go:177] * Using the docker driver based on user configuration
	I0601 19:53:22.364015    9176 start.go:284] selected driver: docker
	I0601 19:53:22.364015    9176 start.go:806] validating driver "docker" against <nil>
	I0601 19:53:22.364015    9176 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 19:53:22.542021    9176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:53:25.281395    9176 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.739232s)
	I0601 19:53:25.282385    9176 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-06-01 19:53:23.9101328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:53:25.282385    9176 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 19:53:25.283391    9176 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 19:53:25.290394    9176 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 19:53:25.294431    9176 cni.go:95] Creating CNI manager for "calico"
	I0601 19:53:25.294431    9176 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0601 19:53:25.294431    9176 start_flags.go:306] config:
	{Name:calico-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:53:25.300379    9176 out.go:177] * Starting control plane node calico-20220601193451-3412 in cluster calico-20220601193451-3412
	I0601 19:53:25.302374    9176 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 19:53:25.306373    9176 out.go:177] * Pulling base image ...
	I0601 19:53:25.308850    9176 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:53:25.308850    9176 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 19:53:25.309509    9176 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 19:53:25.309509    9176 cache.go:57] Caching tarball of preloaded images
	I0601 19:53:25.309509    9176 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 19:53:25.309509    9176 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 19:53:25.309509    9176 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\config.json ...
	I0601 19:53:25.310381    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\config.json: {Name:mk34fa8c6549a061c25bec40ebcb038e7b61cbb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:53:26.754625    9176 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 19:53:26.754625    9176 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 19:53:26.754625    9176 cache.go:206] Successfully downloaded all kic artifacts
	I0601 19:53:26.754625    9176 start.go:352] acquiring machines lock for calico-20220601193451-3412: {Name:mkfcf1be39a784d8180676a112ebb8ae3609b9d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 19:53:26.754625    9176 start.go:356] acquired machines lock for "calico-20220601193451-3412" in 0s
	I0601 19:53:26.754625    9176 start.go:91] Provisioning new machine with config: &{Name:calico-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:53:26.754625    9176 start.go:131] createHost starting for "" (driver="docker")
	I0601 19:53:26.760635    9176 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 19:53:26.760635    9176 start.go:165] libmachine.API.Create for "calico-20220601193451-3412" (driver="docker")
	I0601 19:53:26.760635    9176 client.go:168] LocalClient.Create starting
	I0601 19:53:26.760635    9176 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0601 19:53:26.761628    9176 main.go:134] libmachine: Decoding PEM data...
	I0601 19:53:26.761628    9176 main.go:134] libmachine: Parsing certificate...
	I0601 19:53:26.761628    9176 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0601 19:53:26.761628    9176 main.go:134] libmachine: Decoding PEM data...
	I0601 19:53:26.761628    9176 main.go:134] libmachine: Parsing certificate...
	I0601 19:53:26.770630    9176 cli_runner.go:164] Run: docker network inspect calico-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 19:53:28.023896    9176 cli_runner.go:211] docker network inspect calico-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 19:53:28.023896    9176 cli_runner.go:217] Completed: docker network inspect calico-20220601193451-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2532002s)
	I0601 19:53:28.030904    9176 network_create.go:272] running [docker network inspect calico-20220601193451-3412] to gather additional debugging logs...
	I0601 19:53:28.030904    9176 cli_runner.go:164] Run: docker network inspect calico-20220601193451-3412
	W0601 19:53:29.216451    9176 cli_runner.go:211] docker network inspect calico-20220601193451-3412 returned with exit code 1
	I0601 19:53:29.216451    9176 cli_runner.go:217] Completed: docker network inspect calico-20220601193451-3412: (1.1854849s)
	I0601 19:53:29.216451    9176 network_create.go:275] error running [docker network inspect calico-20220601193451-3412]: docker network inspect calico-20220601193451-3412: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220601193451-3412
	I0601 19:53:29.216451    9176 network_create.go:277] output of [docker network inspect calico-20220601193451-3412]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220601193451-3412
	
	** /stderr **
	I0601 19:53:29.223459    9176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 19:53:30.373290    9176 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1486207s)
	I0601 19:53:30.398443    9176 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006ed0] misses:0}
	I0601 19:53:30.398443    9176 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:30.398443    9176 network_create.go:115] attempt to create docker network calico-20220601193451-3412 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 19:53:30.406056    9176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412
	W0601 19:53:31.564118    9176 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412 returned with exit code 1
	I0601 19:53:31.564118    9176 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412: (1.158002s)
	W0601 19:53:31.564118    9176 network_create.go:107] failed to create docker network calico-20220601193451-3412 192.168.49.0/24, will retry: subnet is taken
	I0601 19:53:31.584133    9176 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006ed0] amended:false}} dirty:map[] misses:0}
	I0601 19:53:31.584133    9176 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:31.605132    9176 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006ed0] amended:true}} dirty:map[192.168.49.0:0xc000006ed0 192.168.58.0:0xc0005b8688] misses:0}
	I0601 19:53:31.605132    9176 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:53:31.605132    9176 network_create.go:115] attempt to create docker network calico-20220601193451-3412 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 19:53:31.613126    9176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412
	I0601 19:53:32.902988    9176 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601193451-3412: (1.2897117s)
	I0601 19:53:32.902988    9176 network_create.go:99] docker network calico-20220601193451-3412 192.168.58.0/24 created
	I0601 19:53:32.902988    9176 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220601193451-3412" container
	I0601 19:53:32.918221    9176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 19:53:34.045134    9176 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1267999s)
	I0601 19:53:34.052300    9176 cli_runner.go:164] Run: docker volume create calico-20220601193451-3412 --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true
	I0601 19:53:35.238060    9176 cli_runner.go:217] Completed: docker volume create calico-20220601193451-3412 --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true: (1.1855506s)
	I0601 19:53:35.238060    9176 oci.go:103] Successfully created a docker volume calico-20220601193451-3412
	I0601 19:53:35.245553    9176 cli_runner.go:164] Run: docker run --rm --name calico-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --entrypoint /usr/bin/test -v calico-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 19:53:38.227234    9176 cli_runner.go:217] Completed: docker run --rm --name calico-20220601193451-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --entrypoint /usr/bin/test -v calico-20220601193451-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib: (2.9814934s)
	I0601 19:53:38.227303    9176 oci.go:107] Successfully prepared a docker volume calico-20220601193451-3412
	I0601 19:53:38.227334    9176 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:53:38.227386    9176 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 19:53:38.235022    9176 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 19:54:04.088576    9176 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220601193451-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (25.8522145s)
	I0601 19:54:04.088576    9176 kic.go:188] duration metric: took 25.859850 seconds to extract preloaded images to volume
	I0601 19:54:04.096562    9176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:54:06.274144    9176 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1774692s)
	I0601 19:54:06.274144    9176 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:70 SystemTime:2022-06-01 19:54:05.1748376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:54:06.281148    9176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 19:54:08.538132    9176 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.256867s)
	I0601 19:54:08.545094    9176 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220601193451-3412 --name calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220601193451-3412 --network calico-20220601193451-3412 --ip 192.168.58.2 --volume calico-20220601193451-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 19:54:11.930358    9176 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220601193451-3412 --name calico-20220601193451-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220601193451-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220601193451-3412 --network calico-20220601193451-3412 --ip 192.168.58.2 --volume calico-20220601193451-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: (3.3850889s)
	I0601 19:54:11.939361    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Running}}
	I0601 19:54:13.163906    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Running}}: (1.2244812s)
	I0601 19:54:13.169910    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:54:14.270845    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.1008781s)
	I0601 19:54:14.277853    9176 cli_runner.go:164] Run: docker exec calico-20220601193451-3412 stat /var/lib/dpkg/alternatives/iptables
	I0601 19:54:15.646579    9176 cli_runner.go:217] Completed: docker exec calico-20220601193451-3412 stat /var/lib/dpkg/alternatives/iptables: (1.3686543s)
	I0601 19:54:15.646579    9176 oci.go:247] the created container "calico-20220601193451-3412" has a running status.
	I0601 19:54:15.646579    9176 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa...
	I0601 19:54:16.040019    9176 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 19:54:17.338639    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:54:18.490972    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.1522736s)
	I0601 19:54:18.509469    9176 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 19:54:18.509469    9176 kic_runner.go:114] Args: [docker exec --privileged calico-20220601193451-3412 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 19:54:19.813889    9176 kic_runner.go:123] Done: [docker exec --privileged calico-20220601193451-3412 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3043526s)
	I0601 19:54:19.816888    9176 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa...
	I0601 19:54:20.363872    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:54:21.536251    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.1722018s)
	I0601 19:54:21.536446    9176 machine.go:88] provisioning docker machine ...
	I0601 19:54:21.536489    9176 ubuntu.go:169] provisioning hostname "calico-20220601193451-3412"
	I0601 19:54:21.545576    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:22.701788    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1561521s)
	I0601 19:54:22.705768    9176 main.go:134] libmachine: Using SSH client type: native
	I0601 19:54:22.711775    9176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61349 <nil> <nil>}
	I0601 19:54:22.712807    9176 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220601193451-3412 && echo "calico-20220601193451-3412" | sudo tee /etc/hostname
	I0601 19:54:22.950643    9176 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220601193451-3412
	
	I0601 19:54:22.962340    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:24.070804    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1083162s)
	I0601 19:54:24.077547    9176 main.go:134] libmachine: Using SSH client type: native
	I0601 19:54:24.077910    9176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61349 <nil> <nil>}
	I0601 19:54:24.077960    9176 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220601193451-3412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220601193451-3412/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220601193451-3412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 19:54:24.288980    9176 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 19:54:24.289050    9176 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0601 19:54:24.289050    9176 ubuntu.go:177] setting up certificates
	I0601 19:54:24.289050    9176 provision.go:83] configureAuth start
	I0601 19:54:24.298125    9176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412
	I0601 19:54:25.412090    9176 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412: (1.1139068s)
	I0601 19:54:25.412266    9176 provision.go:138] copyHostCerts
	I0601 19:54:25.412657    9176 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0601 19:54:25.412694    9176 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0601 19:54:25.413030    9176 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0601 19:54:25.413030    9176 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0601 19:54:25.413030    9176 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0601 19:54:25.414325    9176 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0601 19:54:25.415023    9176 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0601 19:54:25.415023    9176 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0601 19:54:25.415023    9176 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0601 19:54:25.416506    9176 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220601193451-3412 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220601193451-3412]
	I0601 19:54:25.546358    9176 provision.go:172] copyRemoteCerts
	I0601 19:54:25.557757    9176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 19:54:25.563845    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:26.682817    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1189142s)
	I0601 19:54:26.682817    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:54:26.845506    9176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2871267s)
	I0601 19:54:26.846229    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 19:54:26.912216    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0601 19:54:26.976103    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 19:54:27.036807    9176 provision.go:86] duration metric: configureAuth took 2.7476141s
	I0601 19:54:27.036807    9176 ubuntu.go:193] setting minikube options for container-runtime
	I0601 19:54:27.037957    9176 config.go:178] Loaded profile config "calico-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:54:27.047697    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:28.213291    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1655343s)
	I0601 19:54:28.217277    9176 main.go:134] libmachine: Using SSH client type: native
	I0601 19:54:28.218276    9176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61349 <nil> <nil>}
	I0601 19:54:28.218276    9176 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 19:54:28.424380    9176 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 19:54:28.424380    9176 ubuntu.go:71] root file system type: overlay
	I0601 19:54:28.425378    9176 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 19:54:28.433472    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:29.622383    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1888504s)
	I0601 19:54:29.630219    9176 main.go:134] libmachine: Using SSH client type: native
	I0601 19:54:29.630219    9176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61349 <nil> <nil>}
	I0601 19:54:29.630219    9176 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 19:54:29.871959    9176 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 19:54:29.879479    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:30.993227    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1136897s)
	I0601 19:54:30.996836    9176 main.go:134] libmachine: Using SSH client type: native
	I0601 19:54:30.997435    9176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61349 <nil> <nil>}
	I0601 19:54:30.997435    9176 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 19:54:32.373707    9176 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 19:54:29.843216000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 19:54:32.373775    9176 machine.go:91] provisioned docker machine in 10.8367242s
	I0601 19:54:32.373844    9176 client.go:171] LocalClient.Create took 1m5.6098091s
	I0601 19:54:32.373844    9176 start.go:173] duration metric: libmachine.API.Create for "calico-20220601193451-3412" took 1m5.6098091s
	I0601 19:54:32.374073    9176 start.go:306] post-start starting for "calico-20220601193451-3412" (driver="docker")
	I0601 19:54:32.374140    9176 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 19:54:32.391801    9176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 19:54:32.401791    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:33.562102    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1602508s)
	I0601 19:54:33.562102    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:54:33.663563    9176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2716955s)
	I0601 19:54:33.675589    9176 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 19:54:33.689004    9176 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 19:54:33.689230    9176 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 19:54:33.689230    9176 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 19:54:33.689230    9176 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 19:54:33.689230    9176 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0601 19:54:33.689750    9176 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0601 19:54:33.690795    9176 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem -> 34122.pem in /etc/ssl/certs
	I0601 19:54:33.704291    9176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 19:54:33.740134    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /etc/ssl/certs/34122.pem (1708 bytes)
	I0601 19:54:33.795480    9176 start.go:309] post-start completed in 1.4212656s
	I0601 19:54:33.810499    9176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412
	I0601 19:54:34.951115    9176 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412: (1.1405564s)
	I0601 19:54:34.951668    9176 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\config.json ...
	I0601 19:54:34.969134    9176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 19:54:34.977133    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:36.157064    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1798698s)
	I0601 19:54:36.157064    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:54:36.293775    9176 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3245727s)
	I0601 19:54:36.303782    9176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 19:54:36.316780    9176 start.go:134] duration metric: createHost completed in 1m9.5585506s
	I0601 19:54:36.316780    9176 start.go:81] releasing machines lock for "calico-20220601193451-3412", held for 1m9.5585506s
	I0601 19:54:36.326785    9176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412
	I0601 19:54:37.460293    9176 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220601193451-3412: (1.1334492s)
	I0601 19:54:37.462288    9176 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 19:54:37.469299    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:37.470305    9176 ssh_runner.go:195] Run: systemctl --version
	I0601 19:54:37.480288    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:38.650582    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.1700929s)
	I0601 19:54:38.651138    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:54:38.671483    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.2021219s)
	I0601 19:54:38.672478    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:54:38.730452    9176 ssh_runner.go:235] Completed: systemctl --version: (1.2600819s)
	I0601 19:54:38.743350    9176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 19:54:38.888551    9176 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4261895s)
	I0601 19:54:38.905019    9176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:54:38.947370    9176 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 19:54:38.959582    9176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 19:54:38.999222    9176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 19:54:39.042227    9176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 19:54:39.232997    9176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 19:54:39.431555    9176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:54:39.470758    9176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 19:54:39.635951    9176 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 19:54:39.671127    9176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:54:39.767434    9176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:54:39.845639    9176 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 19:54:39.854755    9176 cli_runner.go:164] Run: docker exec -t calico-20220601193451-3412 dig +short host.docker.internal
	I0601 19:54:41.242739    9176 cli_runner.go:217] Completed: docker exec -t calico-20220601193451-3412 dig +short host.docker.internal: (1.3879122s)
	I0601 19:54:41.242739    9176 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 19:54:41.253670    9176 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 19:54:41.265494    9176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:54:41.302417    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:54:42.541908    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.2393673s)
	I0601 19:54:42.542648    9176 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:54:42.549795    9176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:54:42.647178    9176 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:54:42.647178    9176 docker.go:541] Images already preloaded, skipping extraction
	I0601 19:54:42.660625    9176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:54:42.749561    9176 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:54:42.749634    9176 cache_images.go:84] Images are preloaded, skipping loading
	I0601 19:54:42.759690    9176 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 19:54:42.995143    9176 cni.go:95] Creating CNI manager for "calico"
	I0601 19:54:42.995143    9176 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 19:54:42.995143    9176 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220601193451-3412 NodeName:calico-20220601193451-3412 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 19:54:42.995143    9176 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220601193451-3412"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 19:54:42.995143    9176 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220601193451-3412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:calico-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0601 19:54:43.005121    9176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 19:54:43.104255    9176 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 19:54:43.121841    9176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 19:54:43.148867    9176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0601 19:54:43.209470    9176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 19:54:43.248720    9176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0601 19:54:43.299633    9176 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 19:54:43.314632    9176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:54:43.339033    9176 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412 for IP: 192.168.58.2
	I0601 19:54:43.339033    9176 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0601 19:54:43.339633    9176 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0601 19:54:43.341083    9176 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.key
	I0601 19:54:43.341083    9176 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.crt with IP's: []
	I0601 19:54:43.875167    9176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.crt ...
	I0601 19:54:43.875167    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.crt: {Name:mk884aa1af75bab20eca73e5af02a2b2ed594b8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:43.877245    9176 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.key ...
	I0601 19:54:43.877245    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\client.key: {Name:mk3e819972a23bd14f0f7fad002413230e57e81f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:43.877490    9176 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key.cee25041
	I0601 19:54:43.878494    9176 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 19:54:44.460367    9176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt.cee25041 ...
	I0601 19:54:44.460367    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt.cee25041: {Name:mk52d8e670ca3364f5a053c8ade6c801aa4378b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:44.461876    9176 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key.cee25041 ...
	I0601 19:54:44.461876    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key.cee25041: {Name:mkab5d0d2435b7302069181ddb04b3640797cef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:44.462093    9176 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt
	I0601 19:54:44.476006    9176 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key
	I0601 19:54:44.477228    9176 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.key
	I0601 19:54:44.477728    9176 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.crt with IP's: []
	I0601 19:54:45.104993    9176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.crt ...
	I0601 19:54:45.105078    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.crt: {Name:mk99605b4bc9b9dc6679f17881ad4fb223854d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:45.105873    9176 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.key ...
	I0601 19:54:45.106883    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.key: {Name:mk664478799b6c8d95b2deafa2f8e05904344830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:54:45.120566    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem (1338 bytes)
	W0601 19:54:45.120667    9176 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412_empty.pem, impossibly tiny 0 bytes
	I0601 19:54:45.120667    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0601 19:54:45.121375    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0601 19:54:45.121647    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0601 19:54:45.122308    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0601 19:54:45.124658    9176 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem (1708 bytes)
	I0601 19:54:45.129347    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 19:54:45.191448    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 19:54:45.264463    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 19:54:45.321308    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220601193451-3412\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 19:54:45.373509    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 19:54:45.447318    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 19:54:45.502316    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 19:54:45.554312    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 19:54:45.605326    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 19:54:45.661420    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem --> /usr/share/ca-certificates/3412.pem (1338 bytes)
	I0601 19:54:45.725574    9176 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /usr/share/ca-certificates/34122.pem (1708 bytes)
	I0601 19:54:45.780868    9176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 19:54:45.839236    9176 ssh_runner.go:195] Run: openssl version
	I0601 19:54:45.870372    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 19:54:45.908882    9176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:54:45.919879    9176 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:46 /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:54:45.929878    9176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:54:45.957892    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 19:54:46.014156    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3412.pem && ln -fs /usr/share/ca-certificates/3412.pem /etc/ssl/certs/3412.pem"
	I0601 19:54:46.051967    9176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3412.pem
	I0601 19:54:46.075093    9176 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 17:56 /usr/share/ca-certificates/3412.pem
	I0601 19:54:46.088651    9176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3412.pem
	I0601 19:54:46.113766    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3412.pem /etc/ssl/certs/51391683.0"
	I0601 19:54:46.146760    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34122.pem && ln -fs /usr/share/ca-certificates/34122.pem /etc/ssl/certs/34122.pem"
	I0601 19:54:46.185505    9176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34122.pem
	I0601 19:54:46.195505    9176 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 17:56 /usr/share/ca-certificates/34122.pem
	I0601 19:54:46.204498    9176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34122.pem
	I0601 19:54:46.231747    9176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34122.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 19:54:46.254447    9176 kubeadm.go:395] StartCluster: {Name:calico-20220601193451-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220601193451-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:54:46.261446    9176 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 19:54:46.340296    9176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 19:54:46.379288    9176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 19:54:46.406076    9176 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 19:54:46.417749    9176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 19:54:46.436749    9176 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 19:54:46.436749    9176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 19:55:12.801618    9176 out.go:204]   - Generating certificates and keys ...
	I0601 19:55:12.808833    9176 out.go:204]   - Booting up control plane ...
	I0601 19:55:12.814832    9176 out.go:204]   - Configuring RBAC rules ...
	I0601 19:55:12.819821    9176 cni.go:95] Creating CNI manager for "calico"
	I0601 19:55:12.823835    9176 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0601 19:55:12.826841    9176 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 19:55:12.826841    9176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0601 19:55:12.928649    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 19:55:18.414267    9176 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.4852889s)
	I0601 19:55:18.414342    9176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 19:55:18.433341    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=calico-20220601193451-3412 minikube.k8s.io/updated_at=2022_06_01T19_55_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:18.435340    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:18.438348    9176 ops.go:34] apiserver oom_adj: -16
	I0601 19:55:18.713622    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:19.354403    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:19.867710    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:20.361040    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:20.869383    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:21.354643    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:21.869524    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:22.366280    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:23.367349    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:24.358708    9176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:55:25.490144    9176 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1313774s)
	I0601 19:55:25.490144    9176 kubeadm.go:1045] duration metric: took 7.0753608s to wait for elevateKubeSystemPrivileges.
	I0601 19:55:25.490144    9176 kubeadm.go:397] StartCluster complete in 39.2336679s
	I0601 19:55:25.490144    9176 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:55:25.491157    9176 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:55:25.493153    9176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:55:26.331133    9176 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220601193451-3412" rescaled to 1
	I0601 19:55:26.331742    9176 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:55:26.335730    9176 out.go:177] * Verifying Kubernetes components...
	I0601 19:55:26.331742    9176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 19:55:26.331742    9176 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 19:55:26.331742    9176 config.go:178] Loaded profile config "calico-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:55:26.335730    9176 addons.go:65] Setting storage-provisioner=true in profile "calico-20220601193451-3412"
	I0601 19:55:26.335730    9176 addons.go:65] Setting default-storageclass=true in profile "calico-20220601193451-3412"
	I0601 19:55:26.338796    9176 addons.go:153] Setting addon storage-provisioner=true in "calico-20220601193451-3412"
	W0601 19:55:26.338796    9176 addons.go:165] addon storage-provisioner should already be in state true
	I0601 19:55:26.338796    9176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220601193451-3412"
	I0601 19:55:26.338796    9176 host.go:66] Checking if "calico-20220601193451-3412" exists ...
	I0601 19:55:26.348731    9176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:55:26.355729    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:55:26.355729    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:55:26.503525    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:55:26.921015    9176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 19:55:27.812071    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.4562669s)
	I0601 19:55:27.859073    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.5032666s)
	I0601 19:55:27.861072    9176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 19:55:27.864086    9176 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:55:27.864086    9176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 19:55:27.871062    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:55:27.889232    9176 addons.go:153] Setting addon default-storageclass=true in "calico-20220601193451-3412"
	W0601 19:55:27.889232    9176 addons.go:165] addon default-storageclass should already be in state true
	I0601 19:55:27.889232    9176 host.go:66] Checking if "calico-20220601193451-3412" exists ...
	I0601 19:55:27.924231    9176 cli_runner.go:164] Run: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}
	I0601 19:55:27.969205    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.4656036s)
	I0601 19:55:27.972209    9176 node_ready.go:35] waiting up to 5m0s for node "calico-20220601193451-3412" to be "Ready" ...
	I0601 19:55:27.991215    9176 node_ready.go:49] node "calico-20220601193451-3412" has status "Ready":"True"
	I0601 19:55:27.991215    9176 node_ready.go:38] duration metric: took 19.0045ms waiting for node "calico-20220601193451-3412" to be "Ready" ...
	I0601 19:55:27.991215    9176 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 19:55:28.017222    9176 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace to be "Ready" ...
	I0601 19:55:29.229377    9176 cli_runner.go:217] Completed: docker container inspect calico-20220601193451-3412 --format={{.State.Status}}: (1.3050784s)
	I0601 19:55:29.229377    9176 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 19:55:29.229377    9176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 19:55:29.237331    9176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412
	I0601 19:55:29.245342    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.3742085s)
	I0601 19:55:29.245342    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:55:29.913274    9176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:55:30.473854    9176 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601193451-3412: (1.2364593s)
	I0601 19:55:30.473854    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61349 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220601193451-3412\id_rsa Username:docker}
	I0601 19:55:30.822848    9176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 19:55:31.207072    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:33.397570    9176 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.4762209s)
	I0601 19:55:33.397570    9176 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 19:55:33.813846    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:34.489837    9176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.5761528s)
	I0601 19:55:34.489874    9176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.6667926s)
	I0601 19:55:34.499275    9176 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 19:55:34.503545    9176 addons.go:417] enableAddons completed in 8.1713811s
	I0601 19:55:35.863050    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:38.200421    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:40.697117    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:43.302323    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:45.711014    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:48.135268    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:50.686449    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:53.135990    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:55.192182    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:57.195210    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:55:59.206863    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:01.635793    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:03.644317    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:05.707432    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:08.201732    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:10.692268    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:13.191871    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:15.207850    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:17.698224    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:19.705424    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:21.708557    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:24.291545    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:26.703612    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:28.706189    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:31.200388    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:33.697769    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:36.146093    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:38.644586    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:41.191119    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:43.636928    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:45.691519    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:48.188562    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:50.206198    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:52.692396    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:55.144612    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:57.204000    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:56:59.693898    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:02.128404    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:04.201808    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:06.691152    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:09.143786    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:11.195245    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:13.195563    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:15.636340    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:17.649548    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:19.694318    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:22.192326    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:24.212501    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:26.635171    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:28.643764    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:31.194250    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:33.203783    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:35.791588    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:38.197157    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:40.292376    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:42.640614    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:44.643458    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:47.131263    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:49.195370    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:51.632007    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:53.691893    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:55.716315    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:57:58.133406    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:00.135474    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:02.198641    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:04.207905    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:06.295429    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:08.706597    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:11.142235    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:13.194966    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:15.197432    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:17.206707    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:19.628103    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:21.638401    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:24.192598    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:26.301343    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:28.638117    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:30.638630    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:32.646474    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:35.163999    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:37.199690    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:39.717911    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:42.213318    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:44.698101    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:46.709544    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:49.142578    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:51.143680    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:53.195175    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:55.695825    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:58:58.197241    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:00.199367    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:02.642336    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:05.146034    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:07.648110    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:09.699273    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:12.146100    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:14.211000    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:16.299372    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:18.708603    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:21.204006    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:23.634730    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:25.712249    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:27.717511    9176 pod_ready.go:102] pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:28.223415    9176 pod_ready.go:81] duration metric: took 4m0.1938177s waiting for pod "calico-kube-controllers-8594699699-65vm2" in "kube-system" namespace to be "Ready" ...
	E0601 19:59:28.223533    9176 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0601 19:59:28.223533    9176 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-5ppq2" in "kube-system" namespace to be "Ready" ...
	I0601 19:59:30.404598    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:32.897326    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:34.897506    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:37.406208    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:39.840596    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:41.849932    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:44.401057    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:46.501083    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:48.839523    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:50.859133    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:53.403057    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:55.423913    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 19:59:57.923111    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:00.503585    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:02.902327    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:05.399894    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:07.913643    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:10.345364    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:12.399536    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:14.405573    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:16.408349    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:18.915394    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:20.916108    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:23.402720    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:25.404136    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:27.905593    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:30.417191    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:32.418669    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:34.849164    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:36.921471    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:39.336468    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:41.405693    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:43.911811    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:46.002862    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:48.412814    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:50.856979    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:52.905132    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:55.424449    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:00:57.860611    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:00.408336    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:02.848206    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:04.903635    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:06.906521    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:09.419571    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:11.843438    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:13.906265    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:15.918142    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:18.422197    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:20.839787    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:22.906854    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:25.421739    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:27.906479    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:29.922105    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:32.407513    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:34.842993    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:36.924724    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:39.348498    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:41.357568    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:43.842761    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:45.920281    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:48.347924    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:50.842879    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:52.854589    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:01:58.008109    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:00.404939    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:02.856720    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:04.863252    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:08.912720    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:11.408946    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:13.909591    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:16.506506    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:18.909185    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:21.025960    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:23.407972    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:25.511987    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:27.855849    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:29.925041    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:32.423092    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:34.865253    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:36.923702    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:39.356458    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:41.914222    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:44.371886    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:46.848298    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:48.856162    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:51.852075    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:54.363148    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:56.856202    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:02:58.912109    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:02.145988    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:04.353109    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:06.430727    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:08.925024    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:11.355685    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:13.356817    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:15.427516    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:17.909674    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:20.355297    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:22.857933    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:25.438476    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:27.909156    9176 pod_ready.go:102] pod "calico-node-5ppq2" in "kube-system" namespace has status "Ready":"False"
	I0601 20:03:28.428698    9176 pod_ready.go:81] duration metric: took 4m0.1927978s waiting for pod "calico-node-5ppq2" in "kube-system" namespace to be "Ready" ...
	E0601 20:03:28.429000    9176 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0601 20:03:28.429078    9176 pod_ready.go:38] duration metric: took 8m0.4131194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 20:03:28.452623    9176 out.go:177] 
	W0601 20:03:28.454716    9176 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0601 20:03:28.454716    9176 out.go:239] * 
	W0601 20:03:28.455969    9176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 20:03:28.459579    9176 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (612.20s)
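The failure above is a readiness-poll timeout: pod_ready.go probes the calico pods' "Ready" condition roughly every 2s until a per-pod deadline (4m here) expires, then reports "timed out waiting for the condition". As a rough illustration of that loop (this is a hedged sketch of the polling pattern, not minikube's actual Go implementation):

```python
import time

def wait_for_condition(check, timeout=240.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Mirrors the shape of the pod_ready.go loop in the log: repeated
    status probes spaced by `interval`, then a hard deadline that
    produces a "timed out waiting for the condition" style failure.
    Returns True if the condition was met, False on timeout.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

In the failed run, `check` never returned true for `calico-kube-controllers-8594699699-65vm2` or `calico-node-5ppq2`, so both 4m waits expired and the start exited with status 80.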

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (367.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220601193442-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
E0601 19:55:12.184381    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:55:36.942104    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.676457    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.691221    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.706460    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.737757    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.784563    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:26.879428    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:27.050822    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:27.379258    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:28.025425    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:29.312489    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:31.873274    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:37.008417    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:57:41.552674    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 19:57:47.253660    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:58:07.747094    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 19:58:48.719356    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
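The gaps between the cert_rotation retries above roughly double (≈15ms, 15ms, 31ms, 47ms, ... up to tens of seconds), the signature of exponential backoff in client-go's certificate reload loop. A minimal sketch of that delay schedule (parameter values are hypothetical, chosen only to match the visible doubling):

```python
def backoff_delays(base=0.015, factor=2.0, cap=300.0, n=10):
    """Yield n retry delays that grow by `factor` each attempt, capped.

    Hypothetical parameters; this only illustrates the roughly doubling
    spacing between the cert_rotation retry timestamps in the log above.
    """
    delay = base
    for _ in range(n):
        yield delay
        delay = min(delay * factor, cap)
```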

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220601193442-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 80 (6m6.9177241s)

                                                
                                                
-- stdout --
	* [kindnet-20220601193442-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220601193442-3412 in cluster kindnet-20220601193442-3412
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 19:55:04.596954   10744 out.go:296] Setting OutFile to fd 1568 ...
	I0601 19:55:04.652874   10744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:55:04.652874   10744 out.go:309] Setting ErrFile to fd 1580...
	I0601 19:55:04.652874   10744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:55:04.663846   10744 out.go:303] Setting JSON to false
	I0601 19:55:04.665846   10744 start.go:115] hostinfo: {"hostname":"minikube4","uptime":73419,"bootTime":1654039885,"procs":164,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 19:55:04.666857   10744 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 19:55:04.668843   10744 out.go:177] * [kindnet-20220601193442-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 19:55:04.672847   10744 notify.go:193] Checking for updates...
	I0601 19:55:04.675837   10744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:55:04.678847   10744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 19:55:04.681863   10744 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 19:55:04.684863   10744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 19:55:04.689845   10744 config.go:178] Loaded profile config "calico-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:55:04.689845   10744 config.go:178] Loaded profile config "cilium-20220601193451-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:55:04.689845   10744 config.go:178] Loaded profile config "false-20220601193442-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:55:04.690845   10744 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 19:55:07.384788   10744 docker.go:137] docker version: linux-20.10.14
	I0601 19:55:07.399257   10744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:55:09.651854   10744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2524798s)
	I0601 19:55:09.651854   10744 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-06-01 19:55:08.5105107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:55:09.656864   10744 out.go:177] * Using the docker driver based on user configuration
	I0601 19:55:09.658889   10744 start.go:284] selected driver: docker
	I0601 19:55:09.658889   10744 start.go:806] validating driver "docker" against <nil>
	I0601 19:55:09.658889   10744 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 19:55:09.811926   10744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:55:12.153804   10744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3407681s)
	I0601 19:55:12.153804   10744 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-06-01 19:55:10.9435938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:55:12.153804   10744 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 19:55:12.154490   10744 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 19:55:12.157671   10744 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 19:55:12.159552   10744 cni.go:95] Creating CNI manager for "kindnet"
	I0601 19:55:12.159624   10744 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 19:55:12.159682   10744 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 19:55:12.159707   10744 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 19:55:12.159707   10744 start_flags.go:306] config:
	{Name:kindnet-20220601193442-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601193442-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:55:12.166365   10744 out.go:177] * Starting control plane node kindnet-20220601193442-3412 in cluster kindnet-20220601193442-3412
	I0601 19:55:12.169039   10744 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 19:55:12.170941   10744 out.go:177] * Pulling base image ...
	I0601 19:55:12.173940   10744 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:55:12.173940   10744 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 19:55:12.174465   10744 cache.go:57] Caching tarball of preloaded images
	I0601 19:55:12.174546   10744 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 19:55:12.174798   10744 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 19:55:12.174798   10744 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 19:55:12.174798   10744 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\config.json ...
	I0601 19:55:12.175416   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\config.json: {Name:mkf5c9f5c01c02c3425f77434f7023927b011251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:55:13.439106   10744 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 19:55:13.439278   10744 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 19:55:13.439369   10744 cache.go:206] Successfully downloaded all kic artifacts
	I0601 19:55:13.439513   10744 start.go:352] acquiring machines lock for kindnet-20220601193442-3412: {Name:mkbf09aea28500c14ac59344105ce233bbf09806 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 19:55:13.439766   10744 start.go:356] acquired machines lock for "kindnet-20220601193442-3412" in 201.6µs
	I0601 19:55:13.440041   10744 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220601193442-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601193442-3412 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:55:13.440224   10744 start.go:131] createHost starting for "" (driver="docker")
	I0601 19:55:13.443361   10744 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 19:55:13.443615   10744 start.go:165] libmachine.API.Create for "kindnet-20220601193442-3412" (driver="docker")
	I0601 19:55:13.443615   10744 client.go:168] LocalClient.Create starting
	I0601 19:55:13.444249   10744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0601 19:55:13.444249   10744 main.go:134] libmachine: Decoding PEM data...
	I0601 19:55:13.444249   10744 main.go:134] libmachine: Parsing certificate...
	I0601 19:55:13.444249   10744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0601 19:55:13.444792   10744 main.go:134] libmachine: Decoding PEM data...
	I0601 19:55:13.444888   10744 main.go:134] libmachine: Parsing certificate...
	I0601 19:55:13.453042   10744 cli_runner.go:164] Run: docker network inspect kindnet-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 19:55:14.792433   10744 cli_runner.go:211] docker network inspect kindnet-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 19:55:14.792965   10744 cli_runner.go:217] Completed: docker network inspect kindnet-20220601193442-3412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.339322s)
	I0601 19:55:14.807379   10744 network_create.go:272] running [docker network inspect kindnet-20220601193442-3412] to gather additional debugging logs...
	I0601 19:55:14.807379   10744 cli_runner.go:164] Run: docker network inspect kindnet-20220601193442-3412
	W0601 19:55:15.970789   10744 cli_runner.go:211] docker network inspect kindnet-20220601193442-3412 returned with exit code 1
	I0601 19:55:15.970789   10744 cli_runner.go:217] Completed: docker network inspect kindnet-20220601193442-3412: (1.1633491s)
	I0601 19:55:15.970789   10744 network_create.go:275] error running [docker network inspect kindnet-20220601193442-3412]: docker network inspect kindnet-20220601193442-3412: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220601193442-3412
	I0601 19:55:15.970789   10744 network_create.go:277] output of [docker network inspect kindnet-20220601193442-3412]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220601193442-3412
	
	** /stderr **
	I0601 19:55:15.980398   10744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 19:55:17.216318   10744 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2358565s)
	I0601 19:55:17.240325   10744 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00081c4a8] misses:0}
	I0601 19:55:17.240432   10744 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 19:55:17.240432   10744 network_create.go:115] attempt to create docker network kindnet-20220601193442-3412 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 19:55:17.247328   10744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601193442-3412
	I0601 19:55:18.571057   10744 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601193442-3412: (1.3234559s)
	I0601 19:55:18.571057   10744 network_create.go:99] docker network kindnet-20220601193442-3412 192.168.49.0/24 created
	I0601 19:55:18.571057   10744 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220601193442-3412" container
	I0601 19:55:18.592379   10744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 19:55:19.811463   10744 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2190211s)
	I0601 19:55:19.818447   10744 cli_runner.go:164] Run: docker volume create kindnet-20220601193442-3412 --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true
	I0601 19:55:21.030109   10744 cli_runner.go:217] Completed: docker volume create kindnet-20220601193442-3412 --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true: (1.2115994s)
	I0601 19:55:21.030109   10744 oci.go:103] Successfully created a docker volume kindnet-20220601193442-3412
	I0601 19:55:21.040100   10744 cli_runner.go:164] Run: docker run --rm --name kindnet-20220601193442-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --entrypoint /usr/bin/test -v kindnet-20220601193442-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 19:55:23.988060   10744 cli_runner.go:217] Completed: docker run --rm --name kindnet-20220601193442-3412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --entrypoint /usr/bin/test -v kindnet-20220601193442-3412:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib: (2.9469396s)
	I0601 19:55:23.988217   10744 oci.go:107] Successfully prepared a docker volume kindnet-20220601193442-3412
	I0601 19:55:23.988290   10744 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:55:23.988382   10744 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 19:55:24.001838   10744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220601193442-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 19:55:49.368816   10744 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220601193442-3412:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (25.3653761s)
	I0601 19:55:49.368939   10744 kic.go:188] duration metric: took 25.379124 seconds to extract preloaded images to volume
	I0601 19:55:49.379047   10744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 19:55:51.718762   10744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3395939s)
	I0601 19:55:51.718762   10744 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-06-01 19:55:50.5611854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 19:55:51.726654   10744 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 19:55:53.947706   10744 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2209376s)
	I0601 19:55:53.956016   10744 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220601193442-3412 --name kindnet-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --network kindnet-20220601193442-3412 --ip 192.168.49.2 --volume kindnet-20220601193442-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 19:55:56.524825   10744 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220601193442-3412 --name kindnet-20220601193442-3412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220601193442-3412 --network kindnet-20220601193442-3412 --ip 192.168.49.2 --volume kindnet-20220601193442-3412:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: (2.5686765s)
	I0601 19:55:56.548839   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Running}}
	I0601 19:55:57.794541   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Running}}: (1.2456375s)
	I0601 19:55:57.804529   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:55:59.159874   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.3552749s)
	I0601 19:55:59.168908   10744 cli_runner.go:164] Run: docker exec kindnet-20220601193442-3412 stat /var/lib/dpkg/alternatives/iptables
	I0601 19:56:00.656573   10744 cli_runner.go:217] Completed: docker exec kindnet-20220601193442-3412 stat /var/lib/dpkg/alternatives/iptables: (1.487588s)
	I0601 19:56:00.656573   10744 oci.go:247] the created container "kindnet-20220601193442-3412" has a running status.
	I0601 19:56:00.656573   10744 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa...
	I0601 19:56:00.911948   10744 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 19:56:02.199715   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:56:03.413752   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.213974s)
	I0601 19:56:03.431173   10744 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 19:56:03.431173   10744 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220601193442-3412 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 19:56:04.846964   10744 kic_runner.go:123] Done: [docker exec --privileged kindnet-20220601193442-3412 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4156168s)
	I0601 19:56:04.850525   10744 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa...
	I0601 19:56:05.415297   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:56:06.576626   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.1612696s)
	I0601 19:56:06.576626   10744 machine.go:88] provisioning docker machine ...
	I0601 19:56:06.576626   10744 ubuntu.go:169] provisioning hostname "kindnet-20220601193442-3412"
	I0601 19:56:06.583633   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:07.771721   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.188027s)
	I0601 19:56:07.775725   10744 main.go:134] libmachine: Using SSH client type: native
	I0601 19:56:07.783721   10744 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61429 <nil> <nil>}
	I0601 19:56:07.783721   10744 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220601193442-3412 && echo "kindnet-20220601193442-3412" | sudo tee /etc/hostname
	I0601 19:56:07.986695   10744 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220601193442-3412
	
	I0601 19:56:08.000935   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:09.229519   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2285213s)
	I0601 19:56:09.234504   10744 main.go:134] libmachine: Using SSH client type: native
	I0601 19:56:09.235526   10744 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61429 <nil> <nil>}
	I0601 19:56:09.235526   10744 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220601193442-3412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220601193442-3412/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220601193442-3412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 19:56:09.369184   10744 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 19:56:09.369184   10744 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0601 19:56:09.369184   10744 ubuntu.go:177] setting up certificates
	I0601 19:56:09.369184   10744 provision.go:83] configureAuth start
	I0601 19:56:09.378161   10744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412
	I0601 19:56:10.705479   10744 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412: (1.3272501s)
	I0601 19:56:10.705479   10744 provision.go:138] copyHostCerts
	I0601 19:56:10.706482   10744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0601 19:56:10.706482   10744 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0601 19:56:10.707478   10744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0601 19:56:10.709454   10744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0601 19:56:10.709454   10744 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0601 19:56:10.709454   10744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0601 19:56:10.710468   10744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0601 19:56:10.710468   10744 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0601 19:56:10.711528   10744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0601 19:56:10.712489   10744 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-20220601193442-3412 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220601193442-3412]
	I0601 19:56:11.059449   10744 provision.go:172] copyRemoteCerts
	I0601 19:56:11.069452   10744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 19:56:11.076474   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:12.510080   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.4335324s)
	I0601 19:56:12.510080   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:56:12.662081   10744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.5925467s)
	I0601 19:56:12.662081   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 19:56:12.729607   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0601 19:56:12.780955   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 19:56:12.848951   10744 provision.go:86] duration metric: configureAuth took 3.4795867s
	I0601 19:56:12.848951   10744 ubuntu.go:193] setting minikube options for container-runtime
	I0601 19:56:12.849945   10744 config.go:178] Loaded profile config "kindnet-20220601193442-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:56:12.856953   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:14.123167   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2660393s)
	I0601 19:56:14.131009   10744 main.go:134] libmachine: Using SSH client type: native
	I0601 19:56:14.131591   10744 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61429 <nil> <nil>}
	I0601 19:56:14.131642   10744 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 19:56:14.315076   10744 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 19:56:14.315166   10744 ubuntu.go:71] root file system type: overlay
	I0601 19:56:14.315166   10744 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 19:56:14.324489   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:15.556920   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2323212s)
	I0601 19:56:15.559926   10744 main.go:134] libmachine: Using SSH client type: native
	I0601 19:56:15.560930   10744 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61429 <nil> <nil>}
	I0601 19:56:15.560930   10744 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 19:56:15.773635   10744 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 19:56:15.780157   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:17.013796   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2335749s)
	I0601 19:56:17.017803   10744 main.go:134] libmachine: Using SSH client type: native
	I0601 19:56:17.017803   10744 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xd42ea0] 0xd45d00 <nil>  [] 0s} 127.0.0.1 61429 <nil> <nil>}
	I0601 19:56:17.017803   10744 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 19:56:18.706417   10744 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 19:56:15.753262000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 19:56:18.707198   10744 machine.go:91] provisioned docker machine in 12.1299464s
	I0601 19:56:18.707198   10744 client.go:171] LocalClient.Create took 1m5.2602144s
	I0601 19:56:18.707198   10744 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220601193442-3412" took 1m5.2602144s
	I0601 19:56:18.707198   10744 start.go:306] post-start starting for "kindnet-20220601193442-3412" (driver="docker")
	I0601 19:56:18.707198   10744 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 19:56:18.717201   10744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 19:56:18.724492   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:19.956621   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2319853s)
	I0601 19:56:19.956621   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:56:20.109562   10744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3922885s)
	I0601 19:56:20.121020   10744 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 19:56:20.134530   10744 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 19:56:20.134530   10744 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 19:56:20.134597   10744 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 19:56:20.134597   10744 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 19:56:20.134597   10744 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0601 19:56:20.135127   10744 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0601 19:56:20.136258   10744 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem -> 34122.pem in /etc/ssl/certs
	I0601 19:56:20.149139   10744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 19:56:20.170999   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /etc/ssl/certs/34122.pem (1708 bytes)
	I0601 19:56:20.221831   10744 start.go:309] post-start completed in 1.5145545s
	I0601 19:56:20.230828   10744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412
	I0601 19:56:21.433721   10744 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412: (1.202831s)
	I0601 19:56:21.434563   10744 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\config.json ...
	I0601 19:56:21.445560   10744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 19:56:21.452585   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:22.617606   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.1649611s)
	I0601 19:56:22.617606   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:56:22.744040   10744 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2974121s)
	I0601 19:56:22.753034   10744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 19:56:22.767552   10744 start.go:134] duration metric: createHost completed in 1m9.3237047s
	I0601 19:56:22.767552   10744 start.go:81] releasing machines lock for "kindnet-20220601193442-3412", held for 1m9.3241618s
	I0601 19:56:22.774538   10744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412
	I0601 19:56:23.963339   10744 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220601193442-3412: (1.1887396s)
	I0601 19:56:23.965346   10744 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 19:56:23.972364   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:23.973346   10744 ssh_runner.go:195] Run: systemctl --version
	I0601 19:56:23.980344   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:25.176014   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.203535s)
	I0601 19:56:25.176698   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:56:25.202727   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.2223197s)
	I0601 19:56:25.202727   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:56:25.404549   10744 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4391295s)
	I0601 19:56:25.404549   10744 ssh_runner.go:235] Completed: systemctl --version: (1.4311295s)
	I0601 19:56:25.414576   10744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 19:56:25.449640   10744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:56:25.475916   10744 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 19:56:25.490249   10744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 19:56:25.523006   10744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 19:56:25.566701   10744 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 19:56:25.733394   10744 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 19:56:25.943712   10744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 19:56:26.006133   10744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 19:56:26.213673   10744 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 19:56:26.252395   10744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:56:26.363104   10744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 19:56:26.455014   10744 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 19:56:26.463015   10744 cli_runner.go:164] Run: docker exec -t kindnet-20220601193442-3412 dig +short host.docker.internal
	I0601 19:56:27.832741   10744 cli_runner.go:217] Completed: docker exec -t kindnet-20220601193442-3412 dig +short host.docker.internal: (1.3696552s)
	I0601 19:56:27.832741   10744 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 19:56:27.841754   10744 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 19:56:27.855833   10744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:56:27.890340   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:56:29.083360   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.1929578s)
	I0601 19:56:29.087371   10744 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 19:56:29.090368   10744 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 19:56:29.104397   10744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:56:29.172924   10744 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:56:29.172924   10744 docker.go:541] Images already preloaded, skipping extraction
	I0601 19:56:29.178943   10744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 19:56:29.252015   10744 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 19:56:29.252015   10744 cache_images.go:84] Images are preloaded, skipping loading
	I0601 19:56:29.260692   10744 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 19:56:29.493505   10744 cni.go:95] Creating CNI manager for "kindnet"
	I0601 19:56:29.493540   10744 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 19:56:29.493639   10744 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220601193442-3412 NodeName:kindnet-20220601193442-3412 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 19:56:29.494026   10744 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220601193442-3412"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 19:56:29.494224   10744 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220601193442-3412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601193442-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0601 19:56:29.511440   10744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 19:56:29.536682   10744 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 19:56:29.550419   10744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 19:56:29.570412   10744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (405 bytes)
	I0601 19:56:29.610432   10744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 19:56:29.642427   10744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0601 19:56:29.692164   10744 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 19:56:29.704154   10744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 19:56:29.726179   10744 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412 for IP: 192.168.49.2
	I0601 19:56:29.726179   10744 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0601 19:56:29.727407   10744 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0601 19:56:29.727407   10744 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.key
	I0601 19:56:29.727407   10744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.crt with IP's: []
	I0601 19:56:30.014891   10744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.crt ...
	I0601 19:56:30.014891   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.crt: {Name:mk937b1fb8ced05828d29750432f980b54ed6d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.015458   10744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.key ...
	I0601 19:56:30.015458   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\client.key: {Name:mkf27b0d1a707ff704cc335263c589be7339258d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.016556   10744 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key.dd3b5fb2
	I0601 19:56:30.017560   10744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 19:56:30.460841   10744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt.dd3b5fb2 ...
	I0601 19:56:30.460841   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt.dd3b5fb2: {Name:mkc43c83d82b5234f482ce300c00bd1b310f6f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.462754   10744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key.dd3b5fb2 ...
	I0601 19:56:30.462754   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key.dd3b5fb2: {Name:mk6f12ccf8e0151c614b3b6eb8b8cce60ca0f487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.463754   10744 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt
	I0601 19:56:30.471551   10744 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key
	I0601 19:56:30.472499   10744 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.key
	I0601 19:56:30.473425   10744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.crt with IP's: []
	I0601 19:56:30.694151   10744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.crt ...
	I0601 19:56:30.694252   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.crt: {Name:mk0df0c2eb1ab3f744f678ae3b7d1ce416150208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.696268   10744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.key ...
	I0601 19:56:30.696268   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.key: {Name:mk301319f48fe71f777acd354687af7292c32f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:56:30.708136   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem (1338 bytes)
	W0601 19:56:30.708389   10744 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412_empty.pem, impossibly tiny 0 bytes
	I0601 19:56:30.708597   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0601 19:56:30.708746   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0601 19:56:30.708746   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0601 19:56:30.708746   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0601 19:56:30.710041   10744 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem (1708 bytes)
	I0601 19:56:30.712581   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 19:56:30.770142   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 19:56:30.826736   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 19:56:30.883271   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220601193442-3412\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 19:56:30.942879   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 19:56:30.993275   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 19:56:31.054161   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 19:56:31.114527   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 19:56:31.169159   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 19:56:31.227526   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\3412.pem --> /usr/share/ca-certificates/3412.pem (1338 bytes)
	I0601 19:56:31.276958   10744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\34122.pem --> /usr/share/ca-certificates/34122.pem (1708 bytes)
	I0601 19:56:31.331968   10744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 19:56:31.372968   10744 ssh_runner.go:195] Run: openssl version
	I0601 19:56:31.402996   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3412.pem && ln -fs /usr/share/ca-certificates/3412.pem /etc/ssl/certs/3412.pem"
	I0601 19:56:31.435962   10744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3412.pem
	I0601 19:56:31.444968   10744 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 17:56 /usr/share/ca-certificates/3412.pem
	I0601 19:56:31.455962   10744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3412.pem
	I0601 19:56:31.475977   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3412.pem /etc/ssl/certs/51391683.0"
	I0601 19:56:31.510973   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34122.pem && ln -fs /usr/share/ca-certificates/34122.pem /etc/ssl/certs/34122.pem"
	I0601 19:56:31.540989   10744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34122.pem
	I0601 19:56:31.551979   10744 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 17:56 /usr/share/ca-certificates/34122.pem
	I0601 19:56:31.560977   10744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34122.pem
	I0601 19:56:31.582980   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34122.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 19:56:31.621989   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 19:56:31.657984   10744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:56:31.666978   10744 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:46 /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:56:31.674973   10744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 19:56:31.701981   10744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 19:56:31.726981   10744 kubeadm.go:395] StartCluster: {Name:kindnet-20220601193442-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601193442-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 19:56:31.733984   10744 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 19:56:31.808013   10744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 19:56:31.840277   10744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 19:56:31.865907   10744 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 19:56:31.876168   10744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 19:56:31.901682   10744 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 19:56:31.901682   10744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 19:56:32.985462   10744 out.go:204]   - Generating certificates and keys ...
	I0601 19:56:37.994333   10744 out.go:204]   - Booting up control plane ...
	I0601 19:56:53.816726   10744 out.go:204]   - Configuring RBAC rules ...
	I0601 19:56:55.522653   10744 cni.go:95] Creating CNI manager for "kindnet"
	I0601 19:56:55.527741   10744 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 19:56:55.546393   10744 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 19:56:55.605507   10744 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 19:56:55.605507   10744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 19:56:55.725603   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 19:56:59.190572   10744 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.4647901s)
	I0601 19:56:59.190772   10744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 19:56:59.211821   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:56:59.211821   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=kindnet-20220601193442-3412 minikube.k8s.io/updated_at=2022_06_01T19_56_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:56:59.215798   10744 ops.go:34] apiserver oom_adj: -16
	I0601 19:56:59.511178   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:00.167177   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:00.660790   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:01.161543   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:01.663399   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:02.148205   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:02.653104   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:03.159320   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:03.658875   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:04.162498   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:04.650222   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:05.155007   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:05.654208   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:06.159955   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:06.659359   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:07.695496   10744 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0360841s)
	I0601 19:57:08.153195   10744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 19:57:08.410817   10744 kubeadm.go:1045] duration metric: took 9.2195211s to wait for elevateKubeSystemPrivileges.
	I0601 19:57:08.410895   10744 kubeadm.go:397] StartCluster complete in 36.6820255s
	I0601 19:57:08.410932   10744 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:57:08.411305   10744 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 19:57:08.415099   10744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 19:57:09.090459   10744 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220601193442-3412" rescaled to 1
	I0601 19:57:09.090459   10744 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 19:57:09.090459   10744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 19:57:09.094997   10744 out.go:177] * Verifying Kubernetes components...
	I0601 19:57:09.090459   10744 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 19:57:09.091706   10744 config.go:178] Loaded profile config "kindnet-20220601193442-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:57:09.098310   10744 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220601193442-3412"
	I0601 19:57:09.098310   10744 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220601193442-3412"
	W0601 19:57:09.098310   10744 addons.go:165] addon storage-provisioner should already be in state true
	I0601 19:57:09.098310   10744 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220601193442-3412"
	I0601 19:57:09.098310   10744 host.go:66] Checking if "kindnet-20220601193442-3412" exists ...
	I0601 19:57:09.098310   10744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220601193442-3412"
	I0601 19:57:09.121778   10744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:57:09.131793   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:57:09.133785   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:57:09.606975   10744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 19:57:09.625976   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:57:10.307058   10744 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 19:57:10.580157   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.44603s)
	I0601 19:57:10.583525   10744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 19:57:10.586096   10744 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:57:10.586096   10744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 19:57:10.601757   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:57:10.608805   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.476936s)
	I0601 19:57:10.624752   10744 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220601193442-3412"
	W0601 19:57:10.624752   10744 addons.go:165] addon default-storageclass should already be in state true
	I0601 19:57:10.624752   10744 host.go:66] Checking if "kindnet-20220601193442-3412" exists ...
	I0601 19:57:10.658642   10744 cli_runner.go:164] Run: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}
	I0601 19:57:11.146739   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.520685s)
	I0601 19:57:11.150766   10744 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220601193442-3412" to be "Ready" ...
	I0601 19:57:11.993006   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.3911777s)
	I0601 19:57:11.993006   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:57:12.040022   10744 cli_runner.go:217] Completed: docker container inspect kindnet-20220601193442-3412 --format={{.State.Status}}: (1.3813093s)
	I0601 19:57:12.040022   10744 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 19:57:12.040022   10744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 19:57:12.048002   10744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412
	I0601 19:57:12.164038   10744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 19:57:13.227479   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:13.309498   10744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1454008s)
	I0601 19:57:13.666894   10744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601193442-3412: (1.6188085s)
	I0601 19:57:13.667902   10744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61429 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-20220601193442-3412\id_rsa Username:docker}
	I0601 19:57:13.880393   10744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 19:57:14.922456   10744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.0420096s)
	I0601 19:57:14.927309   10744 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 19:57:14.932311   10744 addons.go:417] enableAddons completed in 5.8415511s
	I0601 19:57:15.715706   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:17.721747   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:20.210049   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:22.224186   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:24.225537   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:26.227104   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:28.711627   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:30.722171   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:33.219037   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:35.714303   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:37.727224   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:40.215325   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:42.218913   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:44.220751   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:46.232532   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:48.722948   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:51.220786   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:53.228880   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:55.723275   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:57:57.725782   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:00.224540   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:02.227564   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:04.719262   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:06.720823   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:08.727026   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:11.218349   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:13.718771   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:15.720314   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:17.722449   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:20.216924   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:22.226383   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:24.226641   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:26.732859   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:29.213050   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:31.224749   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:33.729371   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:36.216875   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:38.714183   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:40.718061   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:42.718716   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:45.224979   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:47.722113   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:50.223336   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:52.714992   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:54.721199   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:56.732243   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:58:58.739608   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:01.229303   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:03.232608   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:05.725837   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:08.226656   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:10.728499   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:13.220106   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:15.222196   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:17.228061   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:19.727302   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:22.239567   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:24.733266   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:26.734191   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:29.222430   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:31.719724   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:33.731739   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:36.224933   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:38.727287   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:41.229191   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:43.727847   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:46.225127   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:48.229054   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:50.729271   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:53.218247   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:55.227180   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 19:59:57.730904   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:00.223784   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:02.230701   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:04.239505   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:06.717693   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:08.727607   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:10.732779   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:13.226509   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:15.720623   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:17.723066   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:19.732947   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:21.764106   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:24.236972   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:26.241599   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:28.722986   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:30.727843   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:33.231108   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:35.234074   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:37.235188   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:39.724669   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:41.733529   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:44.222360   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:46.231253   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:48.232687   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:50.724138   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:52.733782   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:55.232804   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:57.240767   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:00:59.734465   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:01.734753   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:03.736682   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:05.737934   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:08.228296   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:10.236937   10744 node_ready.go:58] node "kindnet-20220601193442-3412" has status "Ready":"False"
	I0601 20:01:11.235091   10744 node_ready.go:38] duration metric: took 4m0.0719638s waiting for node "kindnet-20220601193442-3412" to be "Ready" ...
	I0601 20:01:11.240064   10744 out.go:177] 
	W0601 20:01:11.244082   10744 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 20:01:11.244082   10744 out.go:239] * 
	W0601 20:01:11.245075   10744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 20:01:11.249068   10744 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (367.15s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (352.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:00:36.957522    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 20:00:44.777469    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6085163s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5981695s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5633143s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6111016s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6415467s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6242326s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0601 20:02:26.700354    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:02:41.581600    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6602299s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0601 20:02:54.509654    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5229043s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6905355s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5725609s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5493779s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220601193442-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5985497s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (352.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (328.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6146388s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:09:42.400706    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.416302    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.431727    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.462785    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.510282    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.602674    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:42.772910    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:43.103642    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:43.747091    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:45.038232    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:47.614799    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:09:52.744759    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6304697s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:10:02.995880    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6470197s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0601 20:10:12.226327    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.531177    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.545906    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.561308    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.592278    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.640392    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.734304    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:13.905316    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:10:14.234748    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:14.879252    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:16.172005    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:18.737558    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:23.492751    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:10:23.866717    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5964636s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:10:34.111889    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
E0601 20:10:36.991802    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5434379s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5376586s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6308067s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0601 20:11:35.572924    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5712593s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5502782s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0601 20:12:26.398459    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:12:26.729436    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:12:57.506953    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5634112s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:13:49.909360    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5860193s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:175: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:180: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (328.41s)

TestNetworkPlugins/group/kubenet/DNS (354.42s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5632625s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7072044s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5824182s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5819428s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
E0601 20:12:41.601541    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7893989s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5792427s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.591247s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5402007s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6636589s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (111.3µs)
net_test.go:175: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:180: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (354.42s)
E0601 20:24:42.455339    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:24:55.561036    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 20:24:59.214595    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220601200535-3412\client.crt: The system cannot find the path specified.

Test pass (179/213)

| Order | Passed test | Duration (s) |
3 TestDownloadOnly/v1.16.0/json-events 18.09
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.37
10 TestDownloadOnly/v1.23.6/json-events 13.61
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.41
16 TestDownloadOnly/DeleteAll 11.6
17 TestDownloadOnly/DeleteAlwaysSucceeds 7.3
18 TestDownloadOnlyKic 45.54
19 TestBinaryMirror 16.72
20 TestOffline 569.75
22 TestAddons/Setup 385.87
26 TestAddons/parallel/MetricsServer 12.29
27 TestAddons/parallel/HelmTiller 32.77
29 TestAddons/parallel/CSI 77.88
31 TestAddons/serial/GCPAuth 26.69
32 TestAddons/StoppedEnableDisable 24.16
33 TestCertOptions 188.37
34 TestCertExpiration 718.78
35 TestDockerFlags 150.72
36 TestForceSystemdFlag 174.59
37 TestForceSystemdEnv 182.91
42 TestErrorSpam/setup 110.85
43 TestErrorSpam/start 21.77
44 TestErrorSpam/status 19.25
45 TestErrorSpam/pause 17.02
46 TestErrorSpam/unpause 17.31
47 TestErrorSpam/stop 32.72
50 TestFunctional/serial/CopySyncFile 0.03
51 TestFunctional/serial/StartWithProxy 127.54
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 33.53
54 TestFunctional/serial/KubeContext 0.24
55 TestFunctional/serial/KubectlGetPods 0.38
58 TestFunctional/serial/CacheCmd/cache/add_remote 17.94
59 TestFunctional/serial/CacheCmd/cache/add_local 8.97
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.34
61 TestFunctional/serial/CacheCmd/cache/list 0.32
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.22
63 TestFunctional/serial/CacheCmd/cache/cache_reload 24.16
64 TestFunctional/serial/CacheCmd/cache/delete 0.68
65 TestFunctional/serial/MinikubeKubectlCmd 2.03
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.98
67 TestFunctional/serial/ExtraConfig 68.71
68 TestFunctional/serial/ComponentHealth 0.3
69 TestFunctional/serial/LogsCmd 7.35
70 TestFunctional/serial/LogsFileCmd 8.71
72 TestFunctional/parallel/ConfigCmd 2.08
74 TestFunctional/parallel/DryRun 12.74
75 TestFunctional/parallel/InternationalLanguage 5.31
76 TestFunctional/parallel/StatusCmd 19.64
81 TestFunctional/parallel/AddonsCmd 3.32
82 TestFunctional/parallel/PersistentVolumeClaim 48.09
84 TestFunctional/parallel/SSHCmd 12.72
85 TestFunctional/parallel/CpCmd 24.59
86 TestFunctional/parallel/MySQL 75.42
87 TestFunctional/parallel/FileSync 6.34
88 TestFunctional/parallel/CertSync 44.27
92 TestFunctional/parallel/NodeLabels 0.31
94 TestFunctional/parallel/NonActiveRuntimeDisabled 6.54
96 TestFunctional/parallel/ProfileCmd/profile_not_create 9.56
97 TestFunctional/parallel/DockerEnv/powershell 25.64
98 TestFunctional/parallel/ProfileCmd/profile_list 6.84
99 TestFunctional/parallel/ProfileCmd/profile_json_output 6.83
100 TestFunctional/parallel/ImageCommands/ImageListShort 4.14
101 TestFunctional/parallel/ImageCommands/ImageListTable 4.15
102 TestFunctional/parallel/ImageCommands/ImageListJson 4.18
103 TestFunctional/parallel/ImageCommands/ImageListYaml 4.11
104 TestFunctional/parallel/ImageCommands/ImageBuild 17.92
105 TestFunctional/parallel/ImageCommands/Setup 5.57
106 TestFunctional/parallel/Version/short 0.32
107 TestFunctional/parallel/Version/components 5.66
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.86
109 TestFunctional/parallel/UpdateContextCmd/no_changes 3.9
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.94
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.94
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 16.99
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.81
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.12
115 TestFunctional/parallel/ImageCommands/ImageRemove 8.45
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.78
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 13.76
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 16.02
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/delete_addon-resizer_images 0.01
129 TestFunctional/delete_my-image_image 0.01
130 TestFunctional/delete_minikube_cached_images 0.01
133 TestIngressAddonLegacy/StartLegacyK8sCluster 132.43
135 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 39.1
136 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.55
140 TestJSONOutput/start/Command 127.12
141 TestJSONOutput/start/Audit 0
143 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
146 TestJSONOutput/pause/Command 5.94
147 TestJSONOutput/pause/Audit 0
149 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/unpause/Command 5.81
153 TestJSONOutput/unpause/Audit 0
155 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/stop/Command 17.76
159 TestJSONOutput/stop/Audit 0
161 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
163 TestErrorJSONOutput 7.17
165 TestKicCustomNetwork/create_custom_network 135.46
166 TestKicCustomNetwork/use_default_bridge_network 128.36
167 TestKicExistingNetwork 139.24
168 TestKicCustomSubnet 137.72
169 TestMainNoArgs 0.33
170 TestMinikubeProfile 286.74
173 TestMountStart/serial/StartWithMountFirst 48.86
174 TestMountStart/serial/VerifyMountFirst 6.12
175 TestMountStart/serial/StartWithMountSecond 50.84
176 TestMountStart/serial/VerifyMountSecond 6.14
177 TestMountStart/serial/DeleteFirst 19.3
178 TestMountStart/serial/VerifyMountPostDelete 6.21
179 TestMountStart/serial/Stop 8.54
180 TestMountStart/serial/RestartStopped 28.96
181 TestMountStart/serial/VerifyMountPostStop 6.15
184 TestMultiNode/serial/FreshStart2Nodes 250.69
185 TestMultiNode/serial/DeployApp2Nodes 25.07
186 TestMultiNode/serial/PingHostFrom2Pods 10.3
187 TestMultiNode/serial/AddNode 118.04
188 TestMultiNode/serial/ProfileList 6.38
189 TestMultiNode/serial/CopyFile 214.04
190 TestMultiNode/serial/StopNode 28.92
191 TestMultiNode/serial/StartAfterStop 60.38
192 TestMultiNode/serial/RestartKeepsNodes 185.53
193 TestMultiNode/serial/DeleteNode 43.33
194 TestMultiNode/serial/StopMultiNode 40.48
195 TestMultiNode/serial/RestartMultiNode 122.07
196 TestMultiNode/serial/ValidateNameConflict 143.23
200 TestPreload 344.3
201 TestScheduledStopWindows 217.7
205 TestInsufficientStorage 108.41
206 TestRunningBinaryUpgrade 343.11
208 TestKubernetesUpgrade 326.06
209 TestMissingContainerUpgrade 428.24
211 TestNoKubernetes/serial/StartNoK8sWithVersion 0.46
218 TestNoKubernetes/serial/StartWithK8s 143.94
224 TestNoKubernetes/serial/StartWithStopK8s 65.06
225 TestNoKubernetes/serial/Start 58.4
226 TestNoKubernetes/serial/VerifyK8sNotRunning 6.16
227 TestNoKubernetes/serial/ProfileList 23.37
229 TestStoppedBinaryUpgrade/Setup 0.44
230 TestStoppedBinaryUpgrade/Upgrade 426.33
231 TestStoppedBinaryUpgrade/MinikubeLogs 10.92
240 TestPause/serial/Start 155.48
241 TestNetworkPlugins/group/auto/Start 167.83
243 TestPause/serial/SecondStartNoReconfiguration 40.81
244 TestNetworkPlugins/group/auto/KubeletFlags 7.43
245 TestNetworkPlugins/group/auto/NetCatPod 19.9
246 TestPause/serial/Pause 7.29
247 TestNetworkPlugins/group/auto/DNS 0.64
248 TestNetworkPlugins/group/auto/Localhost 0.71
249 TestNetworkPlugins/group/auto/HairPin 5.61
250 TestPause/serial/VerifyStatus 7.49
251 TestPause/serial/Unpause 11.44
254 TestNetworkPlugins/group/false/Start 397.4
256 TestNetworkPlugins/group/false/KubeletFlags 7
257 TestNetworkPlugins/group/false/NetCatPod 21.15
259 TestNetworkPlugins/group/enable-default-cni/Start 389.57
260 TestNetworkPlugins/group/bridge/Start 136.77
261 TestNetworkPlugins/group/kubenet/Start 386.97
262 TestNetworkPlugins/group/bridge/KubeletFlags 6.67
263 TestNetworkPlugins/group/bridge/NetCatPod 24.8
264 TestNetworkPlugins/group/bridge/DNS 0.6
265 TestNetworkPlugins/group/bridge/Localhost 0.55
266 TestNetworkPlugins/group/bridge/HairPin 0.55
272 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 7.28
274 TestNetworkPlugins/group/enable-default-cni/NetCatPod 28.89
279 TestNetworkPlugins/group/kubenet/KubeletFlags 6.5
280 TestNetworkPlugins/group/kubenet/NetCatPod 19.97
TestDownloadOnly/v1.16.0/json-events (18.09s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601174145-3412 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601174145-3412 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (18.0909987s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.09s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.37s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220601174145-3412
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220601174145-3412: exit status 85 (374.0682ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 17:41:46
	Running on machine: minikube4
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 17:41:46.838999    7648 out.go:296] Setting OutFile to fd 648 ...
	I0601 17:41:46.890835    7648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 17:41:46.890835    7648 out.go:309] Setting ErrFile to fd 644...
	I0601 17:41:46.890835    7648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0601 17:41:46.904218    7648 root.go:300] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0601 17:41:46.907052    7648 out.go:303] Setting JSON to true
	I0601 17:41:46.910983    7648 start.go:115] hostinfo: {"hostname":"minikube4","uptime":65422,"bootTime":1654039884,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 17:41:46.910983    7648 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 17:41:46.949255    7648 out.go:97] [download-only-20220601174145-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 17:41:46.949479    7648 notify.go:193] Checking for updates...
	W0601 17:41:46.949479    7648 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0601 17:41:46.953681    7648 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 17:41:46.958276    7648 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 17:41:46.961077    7648 out.go:169] MINIKUBE_LOCATION=14079
	I0601 17:41:46.963630    7648 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0601 17:41:46.967188    7648 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 17:41:46.968152    7648 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 17:41:49.594278    7648 docker.go:137] docker version: linux-20.10.14
	I0601 17:41:49.602154    7648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 17:41:51.654868    7648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0524935s)
	I0601 17:41:51.655683    7648 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 17:41:50.6161812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 17:41:51.683386    7648 out.go:97] Using the docker driver based on user configuration
	I0601 17:41:51.683746    7648 start.go:284] selected driver: docker
	I0601 17:41:51.684268    7648 start.go:806] validating driver "docker" against <nil>
	I0601 17:41:51.704731    7648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 17:41:53.757854    7648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0518347s)
	I0601 17:41:53.758143    7648 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 17:41:52.7047639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 17:41:53.758465    7648 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 17:41:53.885191    7648 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0601 17:41:53.885798    7648 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 17:41:53.899407    7648 out.go:169] Using Docker Desktop driver with the root privilege
	I0601 17:41:53.902014    7648 cni.go:95] Creating CNI manager for ""
	I0601 17:41:53.902742    7648 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 17:41:53.902742    7648 start_flags.go:306] config:
	{Name:download-only-20220601174145-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601174145-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 17:41:53.905415    7648 out.go:97] Starting control plane node download-only-20220601174145-3412 in cluster download-only-20220601174145-3412
	I0601 17:41:53.905544    7648 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 17:41:53.907670    7648 out.go:97] Pulling base image ...
	I0601 17:41:53.907799    7648 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 17:41:53.907799    7648 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 17:41:53.948084    7648 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 17:41:53.948127    7648 cache.go:57] Caching tarball of preloaded images
	I0601 17:41:53.948661    7648 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 17:41:53.951297    7648 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0601 17:41:53.951297    7648 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 17:41:54.028208    7648 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 17:41:54.980426    7648 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 17:41:54.980426    7648 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 17:41:54.980426    7648 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 17:41:54.980426    7648 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 17:41:54.981428    7648 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 17:41:56.548871    7648 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 17:41:56.549393    7648 preload.go:256] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 17:41:57.543324    7648 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 17:41:57.544215    7648 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-20220601174145-3412\config.json ...
	I0601 17:41:57.544215    7648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-20220601174145-3412\config.json: {Name:mk81e8392254f16bc59695ca1cc8f75bf799a083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 17:41:57.545064    7648 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 17:41:57.546463    7648 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601174145-3412"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.37s)
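The LogsDuration test above passes even though `minikube logs` returns exit status 85: on a download-only profile with no control plane node, a non-zero exit is the expected outcome, so the harness asserts on the status code rather than failing. A minimal sketch of that pattern, using a hypothetical stand-in process instead of the real minikube binary:

```python
import subprocess
import sys

# Hypothetical stand-in for "minikube logs -p download-only-...", which
# exits with status 85 (as in the report above) when no node exists.
proc = subprocess.run(
    [sys.executable, "-c", "raise SystemExit(85)"],
    capture_output=True,
    text=True,
)

# Assert the expected non-zero exit instead of treating it as a failure.
assert proc.returncode == 85
```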

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (13.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601174145-3412 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601174145-3412 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (13.6077256s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (13.61s)
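The json-events test exercises `minikube start -o=json`, which streams machine-readable progress as one JSON object per line. A sketch of consuming such a stream; the sample lines and field names below are illustrative assumptions, not taken verbatim from this run:

```python
import json

# Hypothetical sample of line-delimited JSON progress events; the exact
# schema here is an assumption for illustration.
raw = "\n".join([
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","message":"Pulling base image ..."}}',
    '{"type":"io.k8s.sigs.minikube.download.progress","data":{"progress":"0.50"}}',
])

# Parse each non-empty line independently, as a consumer of the stream would.
events = [json.loads(line) for line in raw.splitlines() if line.strip()]
steps = [e["data"]["message"] for e in events if e["type"].endswith(".step")]
```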

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)
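The preload-exists check follows the download URLs in the log, which carry a `?checksum=md5:...` query so the cached tarball can be verified after download. A hedged sketch of that verification step, hashing in chunks and using a temporary file in place of the real tarball under `.minikube\cache\preloaded-tarball`:

```python
import hashlib
import tempfile
from pathlib import Path

def md5_of(path: Path) -> str:
    """Hash a file in fixed-size chunks so large preload tarballs fit in memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a cached preload tarball (hypothetical contents).
with tempfile.TemporaryDirectory() as d:
    tarball = Path(d) / "preload.tar.lz4"
    tarball.write_bytes(b"example bytes")
    digest = md5_of(tarball)
```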

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220601174145-3412
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220601174145-3412: exit status 85 (409.0334ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 17:42:03
	Running on machine: minikube4
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 17:42:03.839245     256 out.go:296] Setting OutFile to fd 696 ...
	I0601 17:42:03.892389     256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 17:42:03.892389     256 out.go:309] Setting ErrFile to fd 700...
	I0601 17:42:03.893251     256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0601 17:42:03.905245     256 root.go:300] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0601 17:42:03.906239     256 out.go:303] Setting JSON to true
	I0601 17:42:03.909241     256 start.go:115] hostinfo: {"hostname":"minikube4","uptime":65439,"bootTime":1654039884,"procs":163,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 17:42:03.909241     256 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 17:42:03.914254     256 out.go:97] [download-only-20220601174145-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 17:42:03.914254     256 notify.go:193] Checking for updates...
	I0601 17:42:03.916242     256 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 17:42:03.919243     256 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 17:42:03.921269     256 out.go:169] MINIKUBE_LOCATION=14079
	I0601 17:42:03.924261     256 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0601 17:42:03.930249     256 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 17:42:03.931250     256 config.go:178] Loaded profile config "download-only-20220601174145-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0601 17:42:03.931250     256 start.go:714] api.Load failed for download-only-20220601174145-3412: filestore "download-only-20220601174145-3412": Docker machine "download-only-20220601174145-3412" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 17:42:03.931250     256 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 17:42:03.932250     256 start.go:714] api.Load failed for download-only-20220601174145-3412: filestore "download-only-20220601174145-3412": Docker machine "download-only-20220601174145-3412" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 17:42:06.557023     256 docker.go:137] docker version: linux-20.10.14
	I0601 17:42:06.567658     256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 17:42:08.689246     256 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1214775s)
	I0601 17:42:08.689246     256 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 17:42:07.603115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 17:42:08.693788     256 out.go:97] Using the docker driver based on existing profile
	I0601 17:42:08.693788     256 start.go:284] selected driver: docker
	I0601 17:42:08.693788     256 start.go:806] validating driver "docker" against &{Name:download-only-20220601174145-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601174145-3412 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 17:42:08.717442     256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 17:42:10.791301     256 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.073556s)
	I0601 17:42:10.791425     256 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 17:42:09.7703298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 17:42:10.841339     256 cni.go:95] Creating CNI manager for ""
	I0601 17:42:10.841339     256 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 17:42:10.841339     256 start_flags.go:306] config:
	{Name:download-only-20220601174145-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220601174145-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 17:42:10.845890     256 out.go:97] Starting control plane node download-only-20220601174145-3412 in cluster download-only-20220601174145-3412
	I0601 17:42:10.845890     256 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 17:42:10.850348     256 out.go:97] Pulling base image ...
	I0601 17:42:10.850348     256 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 17:42:10.850348     256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 17:42:10.901787     256 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 17:42:10.901787     256 cache.go:57] Caching tarball of preloaded images
	I0601 17:42:10.901787     256 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 17:42:10.930337     256 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0601 17:42:10.931187     256 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0601 17:42:11.001693     256 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 17:42:12.004577     256 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 17:42:12.004577     256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 17:42:12.004577     256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 17:42:12.004577     256 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 17:42:12.005209     256 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 17:42:12.005209     256 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 17:42:12.005209     256 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 17:42:13.757610     256 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0601 17:42:13.758157     256 preload.go:256] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601174145-3412"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.41s)

TestDownloadOnly/DeleteAll (11.6s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.5971775s)
--- PASS: TestDownloadOnly/DeleteAll (11.60s)

TestDownloadOnly/DeleteAlwaysSucceeds (7.3s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220601174145-3412
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220601174145-3412: (7.2956803s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (7.30s)

TestDownloadOnlyKic (45.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220601174243-3412 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220601174243-3412 --force --alsologtostderr --driver=docker: (36.3107122s)
helpers_test.go:175: Cleaning up "download-docker-20220601174243-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220601174243-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220601174243-3412: (8.0972513s)
--- PASS: TestDownloadOnlyKic (45.54s)

TestBinaryMirror (16.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220601174329-3412 --alsologtostderr --binary-mirror http://127.0.0.1:57865 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220601174329-3412 --alsologtostderr --binary-mirror http://127.0.0.1:57865 --driver=docker: (8.2524151s)
helpers_test.go:175: Cleaning up "binary-mirror-20220601174329-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220601174329-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220601174329-3412: (8.2151433s)
--- PASS: TestBinaryMirror (16.72s)

TestOffline (569.75s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220601193434-3412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220601193434-3412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (9m3.4687152s)
helpers_test.go:175: Cleaning up "offline-docker-20220601193434-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220601193434-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220601193434-3412: (26.2809917s)
--- PASS: TestOffline (569.75s)

TestAddons/Setup (385.87s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220601174345-3412 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220601174345-3412 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m25.8675901s)
--- PASS: TestAddons/Setup (385.87s)

TestAddons/parallel/MetricsServer (12.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 24.9894ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-gwmcp" [5a0ca3fe-48a1-4c0a-b8a6-e91f7c58fe4f] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0344975s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220601174345-3412 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable metrics-server --alsologtostderr -v=1

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable metrics-server --alsologtostderr -v=1: (6.8437234s)
--- PASS: TestAddons/parallel/MetricsServer (12.29s)

TestAddons/parallel/HelmTiller (32.77s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 24.9894ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-dnf9g" [3c1667af-bbba-4102-bdd7-d363e3bff09f] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0369133s

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220601174345-3412 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220601174345-3412 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (21.7817923s)
addons_test.go:428: kubectl --context addons-20220601174345-3412 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable helm-tiller --alsologtostderr -v=1

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable helm-tiller --alsologtostderr -v=1: (5.9035704s)
--- PASS: TestAddons/parallel/HelmTiller (32.77s)

TestAddons/parallel/CSI (77.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 26.3132ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: (dbg) Done: kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\pvc.yaml: (4.7367252s)
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601174345-3412 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601174345-3412 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [545c115b-b7c4-4195-be1c-a107b20c615a] Pending
helpers_test.go:342: "task-pv-pod" [545c115b-b7c4-4195-be1c-a107b20c615a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [545c115b-b7c4-4195-be1c-a107b20c615a] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.1948656s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601174345-3412 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601174345-3412 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete pod task-pv-pod
addons_test.go:544: (dbg) Done: kubectl --context addons-20220601174345-3412 delete pod task-pv-pod: (1.5043359s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601174345-3412 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [b3a360b2-abff-4687-990c-eeaa684988d7] Pending
helpers_test.go:342: "task-pv-pod-restore" [b3a360b2-abff-4687-990c-eeaa684988d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [b3a360b2-abff-4687-990c-eeaa684988d7] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.0361973s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220601174345-3412 delete pod task-pv-pod-restore: (1.6687467s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable csi-hostpath-driver --alsologtostderr -v=1: (13.6550666s)
addons_test.go:592: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:592: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable volumesnapshots --alsologtostderr -v=1: (5.5563313s)
--- PASS: TestAddons/parallel/CSI (77.88s)

TestAddons/serial/GCPAuth (26.69s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220601174345-3412 create -f testdata\busybox.yaml
addons_test.go:603: (dbg) Done: kubectl --context addons-20220601174345-3412 create -f testdata\busybox.yaml: (1.4009293s)
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e1a5f34a-76f3-4f53-9641-f618d7c6380c] Pending
helpers_test.go:342: "busybox" [e1a5f34a-76f3-4f53-9641-f618d7c6380c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e1a5f34a-76f3-4f53-9641-f618d7c6380c] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0216404s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220601174345-3412 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220601174345-3412 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220601174345-3412 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 addons disable gcp-auth --alsologtostderr -v=1: (13.6909557s)
--- PASS: TestAddons/serial/GCPAuth (26.69s)

TestAddons/StoppedEnableDisable (24.16s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220601174345-3412
addons_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220601174345-3412: (18.4164176s)
addons_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220601174345-3412
addons_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220601174345-3412: (2.8646404s)
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220601174345-3412
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220601174345-3412: (2.8757515s)
--- PASS: TestAddons/StoppedEnableDisable (24.16s)

TestCertOptions (188.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220601194744-3412 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220601194744-3412 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m19.4649556s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220601194744-3412 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0601 19:50:12.174652    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220601194744-3412 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (11.3817688s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220601194744-3412 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220601194744-3412 -- "sudo cat /etc/kubernetes/admin.conf": (6.6626471s)
helpers_test.go:175: Cleaning up "cert-options-20220601194744-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220601194744-3412
E0601 19:50:36.920746    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220601194744-3412: (28.659933s)
--- PASS: TestCertOptions (188.37s)

TestCertExpiration (718.78s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220601193729-3412 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220601193729-3412 --memory=2048 --cert-expiration=3m --driver=docker: (7m48.5360139s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220601193729-3412 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220601193729-3412 --memory=2048 --cert-expiration=8760h --driver=docker: (44.1549824s)
helpers_test.go:175: Cleaning up "cert-expiration-20220601193729-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220601193729-3412

=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220601193729-3412: (26.0726578s)
--- PASS: TestCertExpiration (718.78s)

TestDockerFlags (150.72s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220601193754-3412 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220601193754-3412 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m55.2624583s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220601193754-3412 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220601193754-3412 ssh "sudo systemctl show docker --property=Environment --no-pager": (6.3725351s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220601193754-3412 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220601193754-3412 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (6.5652869s)
helpers_test.go:175: Cleaning up "docker-flags-20220601193754-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220601193754-3412

=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220601193754-3412: (22.5219286s)
--- PASS: TestDockerFlags (150.72s)

TestForceSystemdFlag (174.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220601193434-3412 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220601193434-3412 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m19.813757s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220601193434-3412 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220601193434-3412 ssh "docker info --format {{.CgroupDriver}}": (7.808934s)
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601193434-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220601193434-3412

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220601193434-3412: (26.9618174s)
--- PASS: TestForceSystemdFlag (174.59s)

TestForceSystemdEnv (182.91s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220601193451-3412 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0601 19:34:55.389990    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:35:12.118751    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:35:20.079968    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 19:35:36.880417    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220601193451-3412 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m31.624115s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220601193451-3412 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220601193451-3412 ssh "docker info --format {{.CgroupDriver}}": (7.3788988s)
helpers_test.go:175: Cleaning up "force-systemd-env-20220601193451-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220601193451-3412

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220601193451-3412: (23.9056856s)
--- PASS: TestForceSystemdEnv (182.91s)

TestErrorSpam/setup (110.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220601175256-3412 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 --driver=docker
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220601175256-3412 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 --driver=docker: (1m50.8475768s)
error_spam_test.go:88: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.6."
--- PASS: TestErrorSpam/setup (110.85s)

TestErrorSpam/start (21.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run: (7.389939s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run: (7.2318112s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 start --dry-run: (7.1460515s)
--- PASS: TestErrorSpam/start (21.77s)

TestErrorSpam/status (19.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status
E0601 17:55:11.818056    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:11.833024    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:11.848327    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:11.879296    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:11.926333    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:12.020703    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:12.194163    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:12.527892    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:13.174635    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 17:55:14.465887    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status: (6.4593118s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status
E0601 17:55:17.041122    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status: (6.4471062s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status
E0601 17:55:22.173912    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 status: (6.3421817s)
--- PASS: TestErrorSpam/status (19.25s)

TestErrorSpam/pause (17.02s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause
E0601 17:55:32.426606    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause: (6.1464401s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause: (5.4118574s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 pause: (5.4609199s)
--- PASS: TestErrorSpam/pause (17.02s)

TestErrorSpam/unpause (17.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause: (6.0931395s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause
E0601 17:55:52.916673    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause: (5.5621657s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 unpause: (5.6508072s)
--- PASS: TestErrorSpam/unpause (17.31s)

TestErrorSpam/stop (32.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop: (17.8398901s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop: (7.4471513s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop
E0601 17:56:33.891587    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601175256-3412 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220601175256-3412 stop: (7.4282071s)
--- PASS: TestErrorSpam/stop (32.72s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\3412\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (127.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0601 17:57:55.817128    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
functional_test.go:2160: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m7.5381457s)
--- PASS: TestFunctional/serial/StartWithProxy (127.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --alsologtostderr -v=8: (33.5303283s)
functional_test.go:655: soft start took 33.5323317s for "functional-20220601175654-3412" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.53s)

TestFunctional/serial/KubeContext (0.24s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.24s)

TestFunctional/serial/KubectlGetPods (0.38s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601175654-3412 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.38s)

TestFunctional/serial/CacheCmd/cache/add_remote (17.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:3.1: (5.9336347s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:3.3: (5.9001457s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add k8s.gcr.io/pause:latest: (6.1079101s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (17.94s)

TestFunctional/serial/CacheCmd/cache/add_local (8.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601175654-3412 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local339830891\001
functional_test.go:1069: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220601175654-3412 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local339830891\001: (2.2352883s)
functional_test.go:1081: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add minikube-local-cache-test:functional-20220601175654-3412
functional_test.go:1081: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache add minikube-local-cache-test:functional-20220601175654-3412: (5.3155779s)
functional_test.go:1086: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache delete minikube-local-cache-test:functional-20220601175654-3412
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220601175654-3412
functional_test.go:1075: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220601175654-3412: (1.0544903s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (8.97s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

TestFunctional/serial/CacheCmd/cache/list (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.32s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl images
functional_test.go:1116: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl images: (6.2232434s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (24.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo docker rmi k8s.gcr.io/pause:latest
E0601 18:00:11.818728    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.197325s)
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.2253626s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cache reload: (5.520677s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.2182348s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (24.16s)

TestFunctional/serial/CacheCmd/cache/delete (0.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.68s)

TestFunctional/serial/MinikubeKubectlCmd (2.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 kubectl -- --context functional-20220601175654-3412 get pods
functional_test.go:708: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 kubectl -- --context functional-20220601175654-3412 get pods: (2.0331435s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.03s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220601175654-3412 get pods
functional_test.go:733: (dbg) Done: out\kubectl.exe --context functional-20220601175654-3412 get pods: (1.9710123s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.98s)

TestFunctional/serial/ExtraConfig (68.71s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0601 18:00:39.671577    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
functional_test.go:749: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m8.7127188s)
functional_test.go:753: restart took 1m8.7128297s for "functional-20220601175654-3412" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (68.71s)

TestFunctional/serial/ComponentHealth (0.3s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601175654-3412 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.30s)

TestFunctional/serial/LogsCmd (7.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs
functional_test.go:1228: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs: (7.3531534s)
--- PASS: TestFunctional/serial/LogsCmd (7.35s)

TestFunctional/serial/LogsFileCmd (8.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1561707606\001\logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1561707606\001\logs.txt: (8.705582s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.71s)

TestFunctional/parallel/ConfigCmd (2.08s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config get cpus: exit status 14 (329.6624ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 config get cpus: exit status 14 (324.131ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.08s)

TestFunctional/parallel/DryRun (12.74s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.2151524s)

-- stdout --
	* [functional-20220601175654-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0601 18:02:21.801228    6960 out.go:296] Setting OutFile to fd 644 ...
	I0601 18:02:21.860460    6960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:21.860460    6960 out.go:309] Setting ErrFile to fd 808...
	I0601 18:02:21.860460    6960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:21.873139    6960 out.go:303] Setting JSON to false
	I0601 18:02:21.875147    6960 start.go:115] hostinfo: {"hostname":"minikube4","uptime":66657,"bootTime":1654039884,"procs":171,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 18:02:21.876139    6960 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 18:02:21.880332    6960 out.go:177] * [functional-20220601175654-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 18:02:21.883283    6960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 18:02:21.885846    6960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 18:02:21.887999    6960 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 18:02:21.890018    6960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 18:02:21.892876    6960 config.go:178] Loaded profile config "functional-20220601175654-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 18:02:21.893938    6960 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 18:02:24.542213    6960 docker.go:137] docker version: linux-20.10.14
	I0601 18:02:24.549181    6960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 18:02:26.641002    6960 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0916788s)
	I0601 18:02:26.641411    6960 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-01 18:02:25.5934352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 18:02:26.645655    6960 out.go:177] * Using the docker driver based on existing profile
	I0601 18:02:26.647206    6960 start.go:284] selected driver: docker
	I0601 18:02:26.647206    6960 start.go:806] validating driver "docker" against &{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 18:02:26.647804    6960 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 18:02:26.733901    6960 out.go:177] 
	W0601 18:02:26.735798    6960 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 18:02:26.739803    6960 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --alsologtostderr -v=1 --driver=docker: (7.522168s)
--- PASS: TestFunctional/parallel/DryRun (12.74s)

TestFunctional/parallel/InternationalLanguage (5.31s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601175654-3412 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.30943s)

-- stdout --
	* [functional-20220601175654-3412] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0601 18:02:16.504709    5092 out.go:296] Setting OutFile to fd 900 ...
	I0601 18:02:16.563386    5092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:16.563386    5092 out.go:309] Setting ErrFile to fd 640...
	I0601 18:02:16.563386    5092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 18:02:16.573765    5092 out.go:303] Setting JSON to false
	I0601 18:02:16.578603    5092 start.go:115] hostinfo: {"hostname":"minikube4","uptime":66651,"bootTime":1654039885,"procs":170,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0601 18:02:16.578739    5092 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 18:02:16.583484    5092 out.go:177] * [functional-20220601175654-3412] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 18:02:16.586417    5092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0601 18:02:16.588600    5092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0601 18:02:16.591344    5092 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 18:02:16.594473    5092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 18:02:16.598046    5092 config.go:178] Loaded profile config "functional-20220601175654-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 18:02:16.598895    5092 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 18:02:19.297754    5092 docker.go:137] docker version: linux-20.10.14
	I0601 18:02:19.304752    5092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 18:02:21.396262    5092 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0914018s)
	I0601 18:02:21.396262    5092 info.go:265] docker info: {ID:HYM5:BYAO:UV7O:PMOX:FKBT:EHZV:JC4K:SJUL:2DOV:HANY:W6KD:7EVC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-01 18:02:20.3568668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 18:02:21.401293    5092 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0601 18:02:21.403414    5092 start.go:284] selected driver: docker
	I0601 18:02:21.403486    5092 start.go:806] validating driver "docker" against &{Name:functional-20220601175654-3412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601175654-3412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 18:02:21.403747    5092 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 18:02:21.521702    5092 out.go:177] 
	W0601 18:02:21.523708    5092 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 18:02:21.525702    5092 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.31s)

TestFunctional/parallel/StatusCmd (19.64s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status: (6.498078s)
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (6.5306732s)
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 status -o json: (6.6077307s)
--- PASS: TestFunctional/parallel/StatusCmd (19.64s)

TestFunctional/parallel/AddonsCmd (3.32s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 addons list: (2.9871058s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.32s)

TestFunctional/parallel/PersistentVolumeClaim (48.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [2c9fa5a6-3da0-4755-8028-f65fd62e909b] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0932921s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220601175654-3412 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220601175654-3412 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601175654-3412 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601175654-3412 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [3a5da3c5-8277-4bc6-b783-7051fd58f871] Pending
helpers_test.go:342: "sp-pod" [3a5da3c5-8277-4bc6-b783-7051fd58f871] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [3a5da3c5-8277-4bc6-b783-7051fd58f871] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.1490091s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220601175654-3412 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220601175654-3412 delete -f testdata/storage-provisioner/pod.yaml: (4.4721423s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601175654-3412 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [19b42d3f-c937-4f5e-8ce0-b0d8533ad3bd] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0617653s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.09s)

TestFunctional/parallel/SSHCmd (12.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "echo hello": (6.4127955s)
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "cat /etc/hostname": (6.3070575s)
--- PASS: TestFunctional/parallel/SSHCmd (12.72s)

TestFunctional/parallel/CpCmd (24.59s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cp testdata\cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cp testdata\cp-test.txt /home/docker/cp-test.txt: (5.5182698s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh -n functional-20220601175654-3412 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh -n functional-20220601175654-3412 "sudo cat /home/docker/cp-test.txt": (6.4364993s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cp functional-20220601175654-3412:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3371800001\001\cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 cp functional-20220601175654-3412:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3371800001\001\cp-test.txt: (6.2749353s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh -n functional-20220601175654-3412 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh -n functional-20220601175654-3412 "sudo cat /home/docker/cp-test.txt": (6.3603117s)
--- PASS: TestFunctional/parallel/CpCmd (24.59s)

TestFunctional/parallel/MySQL (75.42s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601175654-3412 replace --force -f testdata\mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-9rpl2" [52bc4ab0-4cc0-47ff-b0d9-69fb3a26a3e3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-9rpl2" [52bc4ab0-4cc0-47ff-b0d9-69fb3a26a3e3] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.1425005s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (565.0772ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (515.1715ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (680.0526ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (563.7915ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (481.6261ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": exit status 1 (481.5284ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Done: kubectl --context functional-20220601175654-3412 exec mysql-b87c45988-9rpl2 -- mysql -ppassword -e "show databases;": (1.4337404s)
--- PASS: TestFunctional/parallel/MySQL (75.42s)
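The six `Non-zero exit` runs above show why this test still passes: while mysqld initializes, the probe query fails first with ERROR 2002 (the server socket is not up yet) and then with ERROR 1045 (the root password has not been applied yet), and the test simply re-runs `mysql -e "show databases;"` until it succeeds. A minimal sketch of that retry-until-ready pattern, with a hypothetical `probe` standing in for the `kubectl exec` call (names and timings here are illustrative, not code from the suite):

```python
import time

def retry(fn, attempts=10, delay=0.01):
    """Re-run fn until it stops raising, returning (result, attempt_number)."""
    last = None
    for i in range(1, attempts + 1):
        try:
            return fn(), i
        except Exception as exc:
            last = exc  # transient startup error; try again after a pause
            time.sleep(delay)
    raise last

state = {"calls": 0}

def probe():
    # Simulated mysqld startup: the first two probes fail the way
    # ERROR 2002 / ERROR 1045 do in the log above.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("ERROR 2002 (HY000): Can't connect to local MySQL server")
    return "information_schema"

result, tries = retry(probe)
print(result, tries)
```

The point is that both error classes are treated as transient; only exhausting the attempt budget would fail the test.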

TestFunctional/parallel/FileSync (6.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/3412/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/test/nested/copy/3412/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/test/nested/copy/3412/hosts": (6.3404016s)
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (6.34s)

TestFunctional/parallel/CertSync (44.27s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/3412.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/3412.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/3412.pem": (6.5917253s)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/3412.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /usr/share/ca-certificates/3412.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /usr/share/ca-certificates/3412.pem": (7.5815456s)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/51391683.0": (7.6332052s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/34122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/34122.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/34122.pem": (7.4623865s)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/34122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /usr/share/ca-certificates/34122.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /usr/share/ca-certificates/34122.pem": (6.8318174s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (8.1686154s)
--- PASS: TestFunctional/parallel/CertSync (44.27s)

TestFunctional/parallel/NodeLabels (0.31s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601175654-3412 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.31s)

TestFunctional/parallel/NonActiveRuntimeDisabled (6.54s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh "sudo systemctl is-active crio": exit status 1 (6.5351456s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (9.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.0399894s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.5203867s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.56s)

TestFunctional/parallel/DockerEnv/powershell (25.64s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220601175654-3412"
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220601175654-3412": (15.5544789s)
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 docker-env | Invoke-Expression ; docker images"
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601175654-3412 docker-env | Invoke-Expression ; docker images": (10.0735555s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (25.64s)

TestFunctional/parallel/ProfileCmd/profile_list (6.84s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.4992452s)
functional_test.go:1310: Took "6.4992452s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1324: Took "345.6877ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (6.84s)

TestFunctional/parallel/ProfileCmd/profile_json_output (6.83s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (6.4633322s)
functional_test.go:1361: Took "6.4633322s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "360.9036ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (6.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (4.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format short: (4.1377275s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220601175654-3412
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220601175654-3412
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (4.14s)

TestFunctional/parallel/ImageCommands/ImageListTable (4.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format table: (4.1489772s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220601175654-3412 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest                         | 0e901e68141fd | 142MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | df7b72818ad2e | 125MB  |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-20220601175654-3412 | b2c8f0f13eca1 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-20220601175654-3412 | 66713ec6949b7 | 30B    |
| docker.io/library/nginx                     | alpine                         | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7                            | 2a0961b7de03c | 462MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | 4c03754524064 | 112MB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.15s)

TestFunctional/parallel/ImageCommands/ImageListJson (4.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format json: (4.1842509s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format json:
[{"id":"66713ec6949b7af560ac6595c64a57cb2292005b92cdf66f28c8a2893f4756bd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220601175654-3412"],"size":"30"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"b2c8f0f13eca186ecb1340b0a3d1dc58c043107df5355492cbbb4e9bc2afc06f","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220601175654-3412"],"size":"1240000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220601175654-3412"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.18s)

TestFunctional/parallel/ImageCommands/ImageListYaml (4.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format yaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format yaml: (4.1136182s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls --format yaml:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 66713ec6949b7af560ac6595c64a57cb2292005b92cdf66f28c8a2893f4756bd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220601175654-3412
size: "30"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220601175654-3412
size: "32900000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.11s)

TestFunctional/parallel/ImageCommands/ImageBuild (17.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 ssh pgrep buildkitd: exit status 1 (6.2979705s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image build -t localhost/my-image:functional-20220601175654-3412 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image build -t localhost/my-image:functional-20220601175654-3412 testdata\build: (7.488052s)
functional_test.go:315: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image build -t localhost/my-image:functional-20220601175654-3412 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 815eebbfb593
Removing intermediate container 815eebbfb593
---> cdc4742df38a
Step 3/3 : ADD content.txt /
---> b2c8f0f13eca
Successfully built b2c8f0f13eca
Successfully tagged localhost/my-image:functional-20220601175654-3412
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (4.1327713s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (17.92s)
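The three `Step 1/3`..`Step 3/3` lines in the build output above imply that the `testdata\build` context contains a Dockerfile along these lines — a reconstruction from the log, not the actual file contents:

```dockerfile
# Reconstructed from the logged build steps; the real testdata\build
# Dockerfile may differ (e.g. in comments or an explicit image tag).
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

Each instruction produces one layer, which matches the intermediate image IDs (`beae173ccac6`, `cdc4742df38a`, `b2c8f0f13eca`) recorded in the log.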

TestFunctional/parallel/ImageCommands/Setup (5.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.4104219s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601175654-3412
functional_test.go:342: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (1.1401178s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.57s)

TestFunctional/parallel/Version/short (0.32s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 version --short
--- PASS: TestFunctional/parallel/Version/short (0.32s)

TestFunctional/parallel/Version/components (5.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 version -o=json --components: (5.6639911s)
--- PASS: TestFunctional/parallel/Version/components (5.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (13.7198097s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (6.1373438s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.86s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.9s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2: (3.8974694s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.90s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.94s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2: (3.9407332s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.94s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.94s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 update-context --alsologtostderr -v=2: (3.9325567s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.94s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (11.996454s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (4.9887271s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.2652767s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601175654-3412
functional_test.go:235: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (1.1307735s)
functional_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (15.0853772s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (4.304506s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.81s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image save gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image save gcr.io/google-containers/addon-resizer:functional-20220601175654-3412 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.1168004s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (8.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image rm gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image rm gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (4.2053182s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (4.2397224s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (8.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220601175654-3412 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220601175654-3412 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [8c858380-b488-47d1-bbd7-fac35eefeb51] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [8c858380-b488-47d1-bbd7-fac35eefeb51] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0752112s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.78s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (13.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (9.4336797s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image ls: (4.3302516s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (13.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (16.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (1.1266615s)
functional_test.go:419: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601175654-3412 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (13.7889939s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601175654-3412

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:424: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: (1.0813535s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (16.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220601175654-3412 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 4348: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.01s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601175654-3412
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601175654-3412: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220601175654-3412" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601175654-3412": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.01s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601175654-3412
functional_test.go:193: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220601175654-3412: context deadline exceeded (583.4µs)
functional_test.go:195: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220601175654-3412": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601175654-3412
functional_test.go:201: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220601175654-3412: context deadline exceeded (0s)
functional_test.go:203: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220601175654-3412": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (132.43s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220601183740-3412 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0601 18:37:41.319109    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.334954    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.350742    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.382045    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.428529    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.520925    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:41.696016    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:42.031244    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:42.682299    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:43.966695    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:46.530977    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:37:51.652656    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:38:01.904653    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:38:22.391322    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:39:03.356520    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220601183740-3412 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m12.434666s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (132.43s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.1s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 addons enable ingress --alsologtostderr -v=5
E0601 18:40:11.949650    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:40:25.294766    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 addons enable ingress --alsologtostderr -v=5: (39.0953446s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.10s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.55s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 addons enable ingress-dns --alsologtostderr -v=5: (4.5486737s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.55s)

TestJSONOutput/start/Command (127.12s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220601184142-3412 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0601 18:42:41.327638    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:43:09.152135    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220601184142-3412 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m7.1203254s)
--- PASS: TestJSONOutput/start/Command (127.12s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (5.94s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220601184142-3412 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220601184142-3412 --output=json --user=testUser: (5.9391219s)
--- PASS: TestJSONOutput/pause/Command (5.94s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (5.81s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220601184142-3412 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220601184142-3412 --output=json --user=testUser: (5.8112947s)
--- PASS: TestJSONOutput/unpause/Command (5.81s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (17.76s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220601184142-3412 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220601184142-3412 --output=json --user=testUser: (17.7549029s)
--- PASS: TestJSONOutput/stop/Command (17.76s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.17s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220601184438-3412 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220601184438-3412 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.1554ms)
-- stdout --
	{"specversion":"1.0","id":"d6201d56-4310-48c2-81a9-4b4039e596bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601184438-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ea33b63-e99f-4b83-b313-1f708d2b3cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0e089cf4-47a0-4809-a33b-cd0fbf5cd86a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d00f0883-8277-49a9-ac19-31b710711d81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"f5764fc4-e6fc-4550-bc72-4d23a6584a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad86de8b-95c8-4d9f-a9d0-46680ce9476e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
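Each line of the stdout above is a self-describing CloudEvents-style JSON envelope whose `type` suffix (`.step`, `.info`, `.error`) classifies the event. A minimal sketch of filtering such output for error events; the two sample lines are abridged copies of lines from the stdout above (ids and messages shortened), and the field names are taken verbatim from it:

```python
import json

# Two events in the shape emitted by `minikube ... --output=json`,
# abridged from the stdout above.
raw = '''
{"specversion":"1.0","id":"d6201d56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.26.0-beta.1","name":"Initial Minikube Setup","totalsteps":"19"}}
{"specversion":"1.0","id":"ad86de8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
'''

def errors(lines):
    """Yield (name, exitcode, message) for every *.error event."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event["type"].endswith(".error"):
            data = event["data"]
            yield data["name"], data["exitcode"], data["message"]

for name, code, msg in errors(raw.splitlines()):
    print(f"{name} (exit {code}): {msg}")
```

Run against the abridged sample, this surfaces the single `DRV_UNSUPPORTED_OS` error with exit code 56, matching the `exit status 56` the test asserts on.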
helpers_test.go:175: Cleaning up "json-output-error-20220601184438-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220601184438-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220601184438-3412: (6.8115053s)
--- PASS: TestErrorJSONOutput (7.17s)

TestKicCustomNetwork/create_custom_network (135.46s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220601184445-3412 --network=
E0601 18:44:55.207736    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:45:11.969498    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.716180    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.731279    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.746949    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.778844    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.826203    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:36.922266    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:37.096427    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:37.425606    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:38.073749    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:39.357096    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:41.930623    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:47.061199    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:45:57.315546    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:46:17.800904    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220601184445-3412 --network=: (1m53.7324615s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0506289s)
helpers_test.go:175: Cleaning up "docker-network-20220601184445-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220601184445-3412
E0601 18:46:58.777954    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220601184445-3412: (20.6633337s)
--- PASS: TestKicCustomNetwork/create_custom_network (135.46s)

TestKicCustomNetwork/use_default_bridge_network (128.36s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220601184701-3412 --network=bridge
E0601 18:47:41.336919    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:48:20.708941    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220601184701-3412 --network=bridge: (1m51.0475518s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.038914s)
helpers_test.go:175: Cleaning up "docker-network-20220601184701-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220601184701-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220601184701-3412: (16.2626104s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (128.36s)

TestKicExistingNetwork (139.24s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0377313s)
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220601184914-3412 --network=existing-network
E0601 18:50:11.981547    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:50:36.744118    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 18:51:04.564300    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220601184914-3412 --network=existing-network: (1m52.0680221s)
helpers_test.go:175: Cleaning up "existing-network-20220601184914-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220601184914-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220601184914-3412: (20.7578492s)
--- PASS: TestKicExistingNetwork (139.24s)

TestKicCustomSubnet (137.72s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220601185128-3412 --subnet=192.168.60.0/24
E0601 18:52:41.349484    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220601185128-3412 --subnet=192.168.60.0/24: (1m56.0534158s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220601185128-3412 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Done: docker network inspect custom-subnet-20220601185128-3412 --format "{{(index .IPAM.Config 0).Subnet}}": (1.0373619s)
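The Go template in the `docker network inspect` call above extracts `.IPAM.Config[0].Subnet` so the test can confirm the network got the requested `--subnet=192.168.60.0/24`. The same check can be sketched against the raw inspect JSON; the sample below is an assumed, heavily abridged shape of `docker network inspect` output carrying only the fields this check reads, with the name and subnet values taken from this run:

```python
import ipaddress
import json

# Abridged `docker network inspect <name>` output (JSON array, one
# entry per network); only the fields the subnet check needs.
inspect_json = (
    '[{"Name": "custom-subnet-20220601185128-3412",'
    ' "IPAM": {"Driver": "default",'
    ' "Config": [{"Subnet": "192.168.60.0/24"}]}}]'
)

def first_subnet(inspect_output: str) -> str:
    """Equivalent of the Go template {{(index .IPAM.Config 0).Subnet}}."""
    return json.loads(inspect_output)[0]["IPAM"]["Config"][0]["Subnet"]

# Compare as networks rather than strings, tolerating formatting
# differences like "192.168.060.0/24".
subnet = first_subnet(inspect_json)
assert ipaddress.ip_network(subnet) == ipaddress.ip_network("192.168.60.0/24")
print(subnet)  # 192.168.60.0/24
```

Comparing via `ipaddress.ip_network` rather than raw string equality is a judgment call; the actual test may well compare strings directly.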
helpers_test.go:175: Cleaning up "custom-subnet-20220601185128-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220601185128-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220601185128-3412: (20.6228914s)
--- PASS: TestKicCustomSubnet (137.72s)

TestMainNoArgs (0.33s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

TestMinikubeProfile (286.74s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220601185347-3412 --driver=docker
E0601 18:54:04.550493    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 18:55:12.002685    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:55:36.755261    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-20220601185347-3412 --driver=docker: (1m54.2037072s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-20220601185347-3412 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-20220601185347-3412 --driver=docker: (1m45.5598848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-20220601185347-3412
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile first-20220601185347-3412: (2.843918s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.0389136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-20220601185347-3412
E0601 18:57:41.372833    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile second-20220601185347-3412: (2.824005s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (9.5462053s)
helpers_test.go:175: Cleaning up "second-20220601185347-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220601185347-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220601185347-3412: (21.1515551s)
helpers_test.go:175: Cleaning up "first-20220601185347-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220601185347-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220601185347-3412: (20.5733176s)
--- PASS: TestMinikubeProfile (286.74s)

TestMountStart/serial/StartWithMountFirst (48.86s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220601185833-3412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220601185833-3412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (47.8475371s)
--- PASS: TestMountStart/serial/StartWithMountFirst (48.86s)

TestMountStart/serial/VerifyMountFirst (6.12s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220601185833-3412 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220601185833-3412 ssh -- ls /minikube-host: (6.1199475s)
--- PASS: TestMountStart/serial/VerifyMountFirst (6.12s)

TestMountStart/serial/StartWithMountSecond (50.84s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220601185833-3412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0601 19:00:12.015465    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220601185833-3412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (49.8364617s)
--- PASS: TestMountStart/serial/StartWithMountSecond (50.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (6.14s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host: (6.1357891s)
--- PASS: TestMountStart/serial/VerifyMountSecond (6.14s)

                                                
                                    
TestMountStart/serial/DeleteFirst (19.3s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220601185833-3412 --alsologtostderr -v=5
E0601 19:00:36.777714    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220601185833-3412 --alsologtostderr -v=5: (19.2949617s)
--- PASS: TestMountStart/serial/DeleteFirst (19.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (6.21s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host: (6.2101212s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (6.21s)

                                                
                                    
TestMountStart/serial/Stop (8.54s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220601185833-3412
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220601185833-3412: (8.5430836s)
--- PASS: TestMountStart/serial/Stop (8.54s)

                                                
                                    
TestMountStart/serial/RestartStopped (28.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220601185833-3412
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220601185833-3412: (27.9501864s)
--- PASS: TestMountStart/serial/RestartStopped (28.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (6.15s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220601185833-3412 ssh -- ls /minikube-host: (6.1447005s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (6.15s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (250.69s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0601 19:01:59.963640    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 19:02:41.390246    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 19:05:12.023942    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:05:36.778253    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m0.9572244s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: (9.7309794s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (250.69s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (25.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.5077741s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- rollout status deployment/busybox: (3.4206819s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.9537859s)
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9271617s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.io: (3.4983554s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.io: (3.2308185s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.default: (2.1689612s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.default: (2.1990469s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- nslookup kubernetes.default.svc.cluster.local: (2.0923862s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- nslookup kubernetes.default.svc.cluster.local: (2.0683646s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.07s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (10.3s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.8963517s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1317095s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-k47jp -- sh -c "ping -c 1 192.168.65.2": (2.1275577s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.0598757s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220601190158-3412 -- exec busybox-7978565885-sfs8c -- sh -c "ping -c 1 192.168.65.2": (2.0791158s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (10.30s)

                                                
                                    
TestMultiNode/serial/AddNode (118.04s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220601190158-3412 -v 3 --alsologtostderr
E0601 19:07:41.398143    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220601190158-3412 -v 3 --alsologtostderr: (1m44.8200797s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: (13.2156606s)
--- PASS: TestMultiNode/serial/AddNode (118.04s)

                                                
                                    
TestMultiNode/serial/ProfileList (6.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.3847385s)
--- PASS: TestMultiNode/serial/ProfileList (6.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (214.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --output json --alsologtostderr: (13.2287972s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test.txt: (6.3645346s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt": (6.2723275s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412.txt: (6.218881s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt": (6.2687798s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m02.txt: (8.4647217s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt": (6.1571071s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m02.txt": (6.2990322s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412:/home/docker/cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m03.txt: (8.5174106s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test.txt": (6.1809988s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412_multinode-20220601190158-3412-m03.txt": (6.2494794s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test.txt
E0601 19:10:12.047563    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test.txt: (6.2629701s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt": (6.2635757s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412-m02.txt: (6.181202s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt": (6.2497162s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412.txt
E0601 19:10:36.795056    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412.txt: (8.6177311s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt"
E0601 19:10:44.615610    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt": (6.3213515s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412.txt": (6.2709844s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m02:/home/docker/cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412-m03.txt: (8.4877305s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test.txt": (6.242046s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m02_multinode-20220601190158-3412-m03.txt": (6.1821727s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp testdata\cp-test.txt multinode-20220601190158-3412-m03:/home/docker/cp-test.txt: (6.2296335s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt": (6.1166557s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3463203539\001\cp-test_multinode-20220601190158-3412-m03.txt: (6.2061167s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt": (6.2078922s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt multinode-20220601190158-3412:/home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412.txt: (8.5412903s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt": (6.2248825s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412.txt": (6.2665054s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 cp multinode-20220601190158-3412-m03:/home/docker/cp-test.txt multinode-20220601190158-3412-m02:/home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412-m02.txt: (8.5442538s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m03 "sudo cat /home/docker/cp-test.txt": (6.1919835s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 ssh -n multinode-20220601190158-3412-m02 "sudo cat /home/docker/cp-test_multinode-20220601190158-3412-m03_multinode-20220601190158-3412-m02.txt": (6.1928017s)
--- PASS: TestMultiNode/serial/CopyFile (214.04s)

                                                
                                    
TestMultiNode/serial/StopNode (28.92s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node stop m03: (7.353388s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status
E0601 19:12:41.425218    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status: exit status 7 (10.7937936s)

-- stdout --
	multinode-20220601190158-3412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601190158-3412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601190158-3412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: exit status 7 (10.7763999s)

-- stdout --
	multinode-20220601190158-3412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601190158-3412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601190158-3412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0601 19:12:41.870106   10148 out.go:296] Setting OutFile to fd 416 ...
	I0601 19:12:41.929272   10148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:12:41.929272   10148 out.go:309] Setting ErrFile to fd 904...
	I0601 19:12:41.929272   10148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:12:41.940675   10148 out.go:303] Setting JSON to false
	I0601 19:12:41.940675   10148 mustload.go:65] Loading cluster: multinode-20220601190158-3412
	I0601 19:12:41.941777   10148 config.go:178] Loaded profile config "multinode-20220601190158-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:12:41.941777   10148 status.go:253] checking status of multinode-20220601190158-3412 ...
	I0601 19:12:41.960085   10148 cli_runner.go:164] Run: docker container inspect multinode-20220601190158-3412 --format={{.State.Status}}
	I0601 19:12:44.468088   10148 cli_runner.go:217] Completed: docker container inspect multinode-20220601190158-3412 --format={{.State.Status}}: (2.5078725s)
	I0601 19:12:44.468088   10148 status.go:328] multinode-20220601190158-3412 host status = "Running" (err=<nil>)
	I0601 19:12:44.468088   10148 host.go:66] Checking if "multinode-20220601190158-3412" exists ...
	I0601 19:12:44.475560   10148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601190158-3412
	I0601 19:12:45.511775   10148 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601190158-3412: (1.0361613s)
	I0601 19:12:45.511775   10148 host.go:66] Checking if "multinode-20220601190158-3412" exists ...
	I0601 19:12:45.521588   10148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 19:12:45.528496   10148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601190158-3412
	I0601 19:12:46.562791   10148 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601190158-3412: (1.0342416s)
	I0601 19:12:46.563018   10148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59545 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-20220601190158-3412\id_rsa Username:docker}
	I0601 19:12:46.691623   10148 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1699741s)
	I0601 19:12:46.701573   10148 ssh_runner.go:195] Run: systemctl --version
	I0601 19:12:46.733103   10148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:12:46.776227   10148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220601190158-3412
	I0601 19:12:47.806899   10148 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220601190158-3412: (1.030618s)
	I0601 19:12:47.808259   10148 kubeconfig.go:92] found "multinode-20220601190158-3412" server: "https://127.0.0.1:59549"
	I0601 19:12:47.808325   10148 api_server.go:165] Checking apiserver status ...
	I0601 19:12:47.820003   10148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 19:12:47.859952   10148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup
	I0601 19:12:47.885744   10148 api_server.go:181] apiserver freezer: "20:freezer:/docker/bd4eee57084b716d5b5213f0bb8754158fdb596266f606a977af70eb6b45b6bb/kubepods/burstable/podb7f8fabbc51b0b6ff628e4292d6de673/cbd74243040167daaf717aae35b482cde2c6f524cbd7b2d836b7e5a59ad4ae5b"
	I0601 19:12:47.896358   10148 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bd4eee57084b716d5b5213f0bb8754158fdb596266f606a977af70eb6b45b6bb/kubepods/burstable/podb7f8fabbc51b0b6ff628e4292d6de673/cbd74243040167daaf717aae35b482cde2c6f524cbd7b2d836b7e5a59ad4ae5b/freezer.state
	I0601 19:12:47.921999   10148 api_server.go:203] freezer state: "THAWED"
	I0601 19:12:47.922093   10148 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59549/healthz ...
	I0601 19:12:47.938183   10148 api_server.go:266] https://127.0.0.1:59549/healthz returned 200:
	ok
	I0601 19:12:47.939179   10148 status.go:419] multinode-20220601190158-3412 apiserver status = Running (err=<nil>)
	I0601 19:12:47.939179   10148 status.go:255] multinode-20220601190158-3412 status: &{Name:multinode-20220601190158-3412 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 19:12:47.939179   10148 status.go:253] checking status of multinode-20220601190158-3412-m02 ...
	I0601 19:12:47.953452   10148 cli_runner.go:164] Run: docker container inspect multinode-20220601190158-3412-m02 --format={{.State.Status}}
	I0601 19:12:48.987720   10148 cli_runner.go:217] Completed: docker container inspect multinode-20220601190158-3412-m02 --format={{.State.Status}}: (1.0341453s)
	I0601 19:12:48.987720   10148 status.go:328] multinode-20220601190158-3412-m02 host status = "Running" (err=<nil>)
	I0601 19:12:48.987720   10148 host.go:66] Checking if "multinode-20220601190158-3412-m02" exists ...
	I0601 19:12:48.994318   10148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601190158-3412-m02
	I0601 19:12:50.050183   10148 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601190158-3412-m02: (1.0558105s)
	I0601 19:12:50.050183   10148 host.go:66] Checking if "multinode-20220601190158-3412-m02" exists ...
	I0601 19:12:50.060483   10148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 19:12:50.067479   10148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601190158-3412-m02
	I0601 19:12:51.130044   10148 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601190158-3412-m02: (1.0625098s)
	I0601 19:12:51.130044   10148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59603 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-20220601190158-3412-m02\id_rsa Username:docker}
	I0601 19:12:51.259943   10148 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1992659s)
	I0601 19:12:51.269400   10148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 19:12:51.304409   10148 status.go:255] multinode-20220601190158-3412-m02 status: &{Name:multinode-20220601190158-3412-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0601 19:12:51.304409   10148 status.go:253] checking status of multinode-20220601190158-3412-m03 ...
	I0601 19:12:51.320294   10148 cli_runner.go:164] Run: docker container inspect multinode-20220601190158-3412-m03 --format={{.State.Status}}
	I0601 19:12:52.387129   10148 cli_runner.go:217] Completed: docker container inspect multinode-20220601190158-3412-m03 --format={{.State.Status}}: (1.06678s)
	I0601 19:12:52.387129   10148 status.go:328] multinode-20220601190158-3412-m03 host status = "Stopped" (err=<nil>)
	I0601 19:12:52.387129   10148 status.go:341] host is not running, skipping remaining checks
	I0601 19:12:52.387129   10148 status.go:255] multinode-20220601190158-3412-m03 status: &{Name:multinode-20220601190158-3412-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (28.92s)

TestMultiNode/serial/StartAfterStop (60.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1143311s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node start m03 --alsologtostderr: (45.9040306s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status: (13.0667504s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (60.38s)

TestMultiNode/serial/RestartKeepsNodes (185.53s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601190158-3412
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220601190158-3412
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220601190158-3412: (38.1417424s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true -v=8 --alsologtostderr
E0601 19:15:12.065118    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:15:36.823252    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true -v=8 --alsologtostderr: (2m26.7594338s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601190158-3412
--- PASS: TestMultiNode/serial/RestartKeepsNodes (185.53s)

TestMultiNode/serial/DeleteNode (43.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 node delete m03: (31.8489423s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: (9.7935839s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:412: (dbg) Done: docker volume ls: (1.0329473s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
E0601 19:17:41.440036    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
--- PASS: TestMultiNode/serial/DeleteNode (43.33s)

TestMultiNode/serial/StopMultiNode (40.48s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 stop: (32.7215674s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status
E0601 19:18:15.335161    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status: exit status 7 (3.8643505s)

-- stdout --
	multinode-20220601190158-3412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601190158-3412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: exit status 7 (3.8925286s)

-- stdout --
	multinode-20220601190158-3412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601190158-3412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0601 19:18:18.493396    6592 out.go:296] Setting OutFile to fd 688 ...
	I0601 19:18:18.549393    6592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:18:18.549393    6592 out.go:309] Setting ErrFile to fd 876...
	I0601 19:18:18.549393    6592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 19:18:18.561392    6592 out.go:303] Setting JSON to false
	I0601 19:18:18.561392    6592 mustload.go:65] Loading cluster: multinode-20220601190158-3412
	I0601 19:18:18.562412    6592 config.go:178] Loaded profile config "multinode-20220601190158-3412": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 19:18:18.562412    6592 status.go:253] checking status of multinode-20220601190158-3412 ...
	I0601 19:18:18.577398    6592 cli_runner.go:164] Run: docker container inspect multinode-20220601190158-3412 --format={{.State.Status}}
	I0601 19:18:21.072021    6592 cli_runner.go:217] Completed: docker container inspect multinode-20220601190158-3412 --format={{.State.Status}}: (2.494493s)
	I0601 19:18:21.072021    6592 status.go:328] multinode-20220601190158-3412 host status = "Stopped" (err=<nil>)
	I0601 19:18:21.072021    6592 status.go:341] host is not running, skipping remaining checks
	I0601 19:18:21.072021    6592 status.go:255] multinode-20220601190158-3412 status: &{Name:multinode-20220601190158-3412 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 19:18:21.072021    6592 status.go:253] checking status of multinode-20220601190158-3412-m02 ...
	I0601 19:18:21.088900    6592 cli_runner.go:164] Run: docker container inspect multinode-20220601190158-3412-m02 --format={{.State.Status}}
	I0601 19:18:22.128064    6592 cli_runner.go:217] Completed: docker container inspect multinode-20220601190158-3412-m02 --format={{.State.Status}}: (1.0391106s)
	I0601 19:18:22.128064    6592 status.go:328] multinode-20220601190158-3412-m02 host status = "Stopped" (err=<nil>)
	I0601 19:18:22.128064    6592 status.go:341] host is not running, skipping remaining checks
	I0601 19:18:22.128064    6592 status.go:255] multinode-20220601190158-3412-m02 status: &{Name:multinode-20220601190158-3412-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.48s)

TestMultiNode/serial/RestartMultiNode (122.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.1183219s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true -v=8 --alsologtostderr --driver=docker
E0601 19:18:40.020599    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 19:20:12.072316    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412 --wait=true -v=8 --alsologtostderr --driver=docker: (1m50.6061438s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220601190158-3412 status --alsologtostderr: (9.6808613s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (122.07s)

TestMultiNode/serial/ValidateNameConflict (143.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601190158-3412
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412-m02 --driver=docker: exit status 14 (407.9498ms)

-- stdout --
	* [multinode-20220601190158-3412-m02] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220601190158-3412-m02' is duplicated with machine name 'multinode-20220601190158-3412-m02' in profile 'multinode-20220601190158-3412'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412-m03 --driver=docker
E0601 19:20:36.839194    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220601190158-3412-m03 --driver=docker: (1m56.1692918s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220601190158-3412
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220601190158-3412: exit status 80 (5.3746208s)

-- stdout --
	* Adding node m03 to cluster multinode-20220601190158-3412
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220601190158-3412-m03 already exists in multinode-20220601190158-3412-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220601190158-3412-m03
E0601 19:22:41.453401    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220601190158-3412-m03: (20.9625079s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (143.23s)

TestPreload (344.3s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220601192324-3412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0601 19:25:12.088751    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:25:36.844714    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220601192324-3412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m48.8687392s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220601192324-3412 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220601192324-3412 -- docker pull gcr.io/k8s-minikube/busybox: (7.3761465s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220601192324-3412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0601 19:27:24.668513    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
E0601 19:27:41.469118    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220601192324-3412 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m18.8107403s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220601192324-3412 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220601192324-3412 -- docker images: (6.2085747s)
helpers_test.go:175: Cleaning up "test-preload-20220601192324-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220601192324-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220601192324-3412: (23.0304722s)
--- PASS: TestPreload (344.30s)

TestScheduledStopWindows (217.7s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220601192908-3412 --memory=2048 --driver=docker
E0601 19:30:12.104544    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:30:36.869120    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220601192908-3412 --memory=2048 --driver=docker: (1m49.3679534s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220601192908-3412 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220601192908-3412 --schedule 5m: (6.0457438s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220601192908-3412 -n scheduled-stop-20220601192908-3412
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220601192908-3412 -n scheduled-stop-20220601192908-3412: (6.5059128s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220601192908-3412 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220601192908-3412 -- sudo systemctl show minikube-scheduled-stop --no-page: (6.2256553s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220601192908-3412 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220601192908-3412 --schedule 5s: (4.6490311s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220601192908-3412
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220601192908-3412: exit status 7 (2.8391911s)

-- stdout --
	scheduled-stop-20220601192908-3412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220601192908-3412 -n scheduled-stop-20220601192908-3412
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220601192908-3412 -n scheduled-stop-20220601192908-3412: exit status 7 (2.8305828s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220601192908-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220601192908-3412
E0601 19:32:41.483199    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220601192908-3412: (19.2196339s)
--- PASS: TestScheduledStopWindows (217.70s)

TestInsufficientStorage (108.41s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220601193246-3412 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220601193246-3412 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m16.1801505s)

-- stdout --
	{"specversion":"1.0","id":"baa5ab2a-4a8d-424e-b096-2d55e236e564","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220601193246-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4917e69-1889-490f-a918-e9fa5f7af40e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8627503c-3a76-4970-a70b-7576da1d1a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2748f18c-48dd-4604-b513-ddaec3ff80d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"4a40237e-dc7e-42df-ba2c-5ded52686373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f1ad4c6c-df21-41a1-ac02-be3f857ab185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"36038c22-efff-4714-b7fa-a151475c6658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6be97f89-d11d-4ab6-8794-c1ab9d0a658b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc0e49cf-759a-4872-af98-1ad03fbb99a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"5af8b674-9092-4ea6-8b01-2cb491acc059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220601193246-3412 in cluster insufficient-storage-20220601193246-3412","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fb5ca9d-3ca5-416d-a555-f9bea146ba1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"74534b1d-b09b-4330-917c-a814e1412fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6aa8336e-2a63-4b4e-87d6-91a3d0bb5f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
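The `--output=json` start log above is a stream of CloudEvents, one JSON object per line; the `type` field distinguishes progress steps (`io.k8s.sigs.minikube.step`), informational messages (`io.k8s.sigs.minikube.info`), and errors (`io.k8s.sigs.minikube.error`). A minimal sketch, not part of the test suite, of pulling the error event out of such a stream (event bodies abridged from the log above):

```python
import json

# Abridged CloudEvents lines from the `minikube start --output=json` stream above.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"issues":"https://github.com/kubernetes/minikube/issues/9024"}}',
]

# Collect error events; note that exitcode arrives as a string, not an int.
errors = [
    json.loads(line)["data"]
    for line in stream
    if json.loads(line)["type"] == "io.k8s.sigs.minikube.error"
]
for err in errors:
    print(err["name"], err["exitcode"])  # RSRC_DOCKER_STORAGE 26
```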
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220601193246-3412 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220601193246-3412 --output=json --layout=cluster: exit status 7 (6.2810824s)

-- stdout --
	{"Name":"insufficient-storage-20220601193246-3412","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601193246-3412","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 19:34:08.852704    9660 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601193246-3412" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220601193246-3412 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220601193246-3412 --output=json --layout=cluster: exit status 7 (6.2699339s)

-- stdout --
	{"Name":"insufficient-storage-20220601193246-3412","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601193246-3412","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 19:34:15.138000    3264 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601193246-3412" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E0601 19:34:15.174684    3264 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-20220601193246-3412\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220601193246-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220601193246-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220601193246-3412: (19.6778911s)
--- PASS: TestInsufficientStorage (108.41s)
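The cluster-layout status JSON shown in the `-- stdout --` blocks above is machine-readable. A minimal sketch, not part of the test suite, of detecting the insufficient-storage condition from it, using an abridged copy of the payload logged above:

```python
import json

# Abridged `minikube status --output=json --layout=cluster` payload from the run above.
status = json.loads("""
{"Name":"insufficient-storage-20220601193246-3412",
 "StatusCode":507,"StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space",
 "Nodes":[{"Name":"insufficient-storage-20220601193246-3412",
           "StatusCode":507,"StatusName":"InsufficientStorage",
           "Components":{"apiserver":{"StatusCode":405,"StatusName":"Stopped"},
                         "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}
""")

def storage_starved(cluster: dict) -> bool:
    # 507 mirrors the HTTP "Insufficient Storage" code; in the layout=cluster
    # output it appears both at the cluster level and on each affected node.
    return cluster["StatusCode"] == 507 or any(
        n["StatusCode"] == 507 for n in cluster.get("Nodes", [])
    )

print(storage_starved(status))  # True
```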

TestRunningBinaryUpgrade (343.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.2425942093.exe start -p running-upgrade-20220601194733-3412 --memory=2200 --vm-driver=docker
E0601 19:47:41.521377    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.2425942093.exe start -p running-upgrade-20220601194733-3412 --memory=2200 --vm-driver=docker: (3m36.0911327s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220601194733-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0601 19:51:35.443364    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 19:52:00.142158    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220601194733-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m39.2876899s)
helpers_test.go:175: Cleaning up "running-upgrade-20220601194733-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220601194733-3412

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220601194733-3412: (27.2445139s)
--- PASS: TestRunningBinaryUpgrade (343.11s)

TestKubernetesUpgrade (326.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
E0601 19:44:04.722185    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m15.1779812s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220601194404-3412
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220601194404-3412: (9.6288274s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220601194404-3412 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220601194404-3412 status --format={{.Host}}: exit status 7 (3.0635162s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker: (1m49.7007625s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601194404-3412 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (422.1442ms)

-- stdout --
	* [kubernetes-upgrade-20220601194404-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220601194404-3412
	    minikube start -p kubernetes-upgrade-20220601194404-3412 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601194404-34122 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601194404-3412 --kubernetes-version=v1.23.6
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601194404-3412 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker: (44.3369054s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601194404-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220601194404-3412

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220601194404-3412: (23.4312145s)
--- PASS: TestKubernetesUpgrade (326.06s)
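Several distinct non-zero exit codes appear across this run: 7 from `status` against a stopped or degraded cluster, 14 for the `--no-kubernetes` usage error, 26 for the out-of-disk error, and 106 for the refused downgrade above. A small lookup of the codes as this log pairs them with reason names (an illustrative subset observed in this run only, not minikube's full reason table):

```python
# Exit codes observed in this run, with the reason IDs the log pairs them with.
# This is only the subset seen here, not minikube's complete mapping.
OBSERVED_EXIT_CODES = {
    7: "status error: cluster stopped or degraded (may be ok)",
    14: "MK_USAGE: invalid flag combination",
    26: "RSRC_DOCKER_STORAGE: Docker is out of disk space",
    106: "K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade a cluster in place",
}

def explain(code: int) -> str:
    """Describe an exit code seen in this run, or flag it as unrecognized."""
    return OBSERVED_EXIT_CODES.get(code, f"exit status {code}: not seen in this run")

print(explain(106))  # K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade a cluster in place
```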

TestMissingContainerUpgrade (428.24s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.1.112605205.exe start -p missing-upgrade-20220601194025-3412 --memory=2200 --driver=docker
E0601 19:40:36.892877    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
E0601 19:42:41.513122    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.1.112605205.exe start -p missing-upgrade-20220601194025-3412 --memory=2200 --driver=docker: (4m2.9586694s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220601194025-3412
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220601194025-3412: (10.4784104s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220601194025-3412
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220601194025-3412: (1.218919s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220601194025-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0601 19:45:12.155270    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220601194025-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m25.3217303s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220601194025-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220601194025-3412

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220601194025-3412: (27.7472798s)
--- PASS: TestMissingContainerUpgrade (428.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (458.4256ms)

-- stdout --
	* [NoKubernetes-20220601193434-3412] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)

TestNoKubernetes/serial/StartWithK8s (143.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --driver=docker: (2m15.8706066s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220601193434-3412 status -o json

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220601193434-3412 status -o json: (8.0682211s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (143.94s)

TestNoKubernetes/serial/StartWithStopK8s (65.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --driver=docker: (38.5308141s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220601193434-3412 status -o json
E0601 19:37:41.504533    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220601193434-3412 status -o json: exit status 2 (6.4557582s)

-- stdout --
	{"Name":"NoKubernetes-20220601193434-3412","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220601193434-3412

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220601193434-3412: (20.0688437s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.06s)
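The exit status 2 above is expected: the profile's container is running while no Kubernetes components are. A minimal sketch, not part of the suite, of reading that state back from the `status -o json` payload logged above:

```python
import json

# `minikube status -o json` payload from the run above, verbatim.
status = json.loads(
    '{"Name":"NoKubernetes-20220601193434-3412","Host":"Running",'
    '"Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured",'
    '"Worker":false}'
)

# A --no-kubernetes profile keeps the host container up with kubelet and
# apiserver stopped, which `status` reports via a non-zero exit code.
host_only = status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(host_only)  # True
```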

TestNoKubernetes/serial/Start (58.4s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601193434-3412 --no-kubernetes --driver=docker: (58.401206s)
--- PASS: TestNoKubernetes/serial/Start (58.40s)

TestNoKubernetes/serial/VerifyK8sNotRunning (6.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220601193434-3412 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220601193434-3412 "sudo systemctl is-active --quiet service kubelet": exit status 1 (6.157732s)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (6.16s)

TestNoKubernetes/serial/ProfileList (23.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.6776339s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (11.6941015s)
--- PASS: TestNoKubernetes/serial/ProfileList (23.37s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestStoppedBinaryUpgrade/Upgrade (426.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.3937853176.exe start -p stopped-upgrade-20220601194002-3412 --memory=2200 --vm-driver=docker
E0601 19:40:12.131556    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.3937853176.exe start -p stopped-upgrade-20220601194002-3412 --memory=2200 --vm-driver=docker: (5m32.2234115s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.3937853176.exe -p stopped-upgrade-20220601194002-3412 stop
E0601 19:45:36.906553    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601183740-3412\client.crt: The system cannot find the path specified.
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.3937853176.exe -p stopped-upgrade-20220601194002-3412 stop: (22.7169193s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220601194002-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220601194002-3412 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m11.3914579s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (426.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.92s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220601194002-3412
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220601194002-3412: (10.9192842s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.92s)

TestPause/serial/Start (155.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220601194928-3412 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220601194928-3412 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m35.482572s)
--- PASS: TestPause/serial/Start (155.48s)

TestNetworkPlugins/group/auto/Start (167.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (2m47.8327199s)
--- PASS: TestNetworkPlugins/group/auto/Start (167.83s)

TestPause/serial/SecondStartNoReconfiguration (40.81s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220601194928-3412 --alsologtostderr -v=1 --driver=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220601194928-3412 --alsologtostderr -v=1 --driver=docker: (40.7820366s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.81s)

TestNetworkPlugins/group/auto/KubeletFlags (7.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220601193434-3412 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220601193434-3412 "pgrep -a kubelet": (7.4299599s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (7.43s)

TestNetworkPlugins/group/auto/NetCatPod (19.9s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220601193434-3412 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-gps58" [48700182-b3f9-4b1f-a12b-23526f1df104] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-gps58" [48700182-b3f9-4b1f-a12b-23526f1df104] Running
E0601 19:52:41.550843    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220601175654-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 19.0308852s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (19.90s)

TestPause/serial/Pause (7.29s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220601194928-3412 --alsologtostderr -v=5

=== CONT  TestPause/serial/Pause
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220601194928-3412 --alsologtostderr -v=5: (7.2936633s)
--- PASS: TestPause/serial/Pause (7.29s)

TestNetworkPlugins/group/auto/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.64s)

TestNetworkPlugins/group/auto/Localhost (0.71s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220601193434-3412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.71s)

TestNetworkPlugins/group/auto/HairPin (5.61s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220601193434-3412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220601193434-3412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5988019s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.61s)

TestPause/serial/VerifyStatus (7.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220601194928-3412 --output=json --layout=cluster

=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220601194928-3412 --output=json --layout=cluster: exit status 2 (7.4900959s)

-- stdout --
	{"Name":"pause-20220601194928-3412","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220601194928-3412","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (7.49s)

TestPause/serial/Unpause (11.44s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220601194928-3412 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220601194928-3412 --alsologtostderr -v=5: (11.4427301s)
--- PASS: TestPause/serial/Unpause (11.44s)

TestNetworkPlugins/group/false/Start (397.4s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220601193442-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220601193442-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (6m37.4035136s)
--- PASS: TestNetworkPlugins/group/false/Start (397.40s)

TestNetworkPlugins/group/false/KubeletFlags (7s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220601193442-3412 "pgrep -a kubelet"
E0601 20:00:10.653765    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220601193434-3412\client.crt: The system cannot find the path specified.
E0601 20:00:12.199405    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20220601193442-3412 "pgrep -a kubelet": (6.9996699s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (7.00s)

TestNetworkPlugins/group/false/NetCatPod (21.15s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220601193442-3412 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-vgjnz" [5a355289-bb67-499a-b012-e2b75b5bac1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-vgjnz" [5a355289-bb67-499a-b012-e2b75b5bac1b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 20.0879052s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (21.15s)

TestNetworkPlugins/group/enable-default-cni/Start (389.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (6m29.567572s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (389.57s)

TestNetworkPlugins/group/bridge/Start (136.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (2m16.7690121s)
--- PASS: TestNetworkPlugins/group/bridge/Start (136.77s)

TestNetworkPlugins/group/kubenet/Start (386.97s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20220601193434-3412 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (6m26.9682186s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (386.97s)

TestNetworkPlugins/group/bridge/KubeletFlags (6.67s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220601193434-3412 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220601193434-3412 "pgrep -a kubelet": (6.6736375s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (6.67s)

TestNetworkPlugins/group/bridge/NetCatPod (24.8s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220601193434-3412 replace --force -f testdata\netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-bhd9v" [d04c5813-e498-4143-84d5-cb9a76e92d0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-668db85669-bhd9v" [d04c5813-e498-4143-84d5-cb9a76e92d0f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 24.0306429s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (24.80s)

TestNetworkPlugins/group/bridge/DNS (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220601193434-3412 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.60s)

TestNetworkPlugins/group/bridge/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220601193434-3412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.55s)

TestNetworkPlugins/group/bridge/HairPin (0.55s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220601193434-3412 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220601193434-3412 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220601193434-3412 "pgrep -a kubelet": (7.2777723s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (28.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220601193434-3412 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-6dxgw" [a6e37fb3-3b6b-41ac-85b2-529d1ce28915] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-668db85669-6dxgw" [a6e37fb3-3b6b-41ac-85b2-529d1ce28915] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 28.029701s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (28.89s)

TestNetworkPlugins/group/kubenet/KubeletFlags (6.5s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20220601193434-3412 "pgrep -a kubelet"
E0601 20:10:54.600254    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220601193442-3412\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20220601193434-3412 "pgrep -a kubelet": (6.5037044s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (6.50s)

TestNetworkPlugins/group/kubenet/NetCatPod (19.97s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220601193434-3412 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-t6ndg" [79743807-845a-4c74-9dc3-6205e858932f] Pending
helpers_test.go:342: "netcat-668db85669-t6ndg" [79743807-845a-4c74-9dc3-6205e858932f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 20:11:04.462671    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-20220601193434-3412\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-668db85669-t6ndg" [79743807-845a-4c74-9dc3-6205e858932f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.0976381s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (19.97s)


Test skip (24/213)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestAddons/parallel/Registry (28.14s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 24.9894ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-m26bq" [e37d1a7d-6996-4b4c-bb66-46d143c82558] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0377661s

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-56fst" [a9fd0105-ba62-4bc7-a02b-117363b2b40d] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0442654s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220601174345-3412 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220601174345-3412 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220601174345-3412 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (17.5944012s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (28.14s)

TestAddons/parallel/Ingress (29.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220601174345-3412 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220601174345-3412 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220601174345-3412 replace --force -f testdata\nginx-ingress-v1.yaml: (1.3615111s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220601174345-3412 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [160eff83-090d-4b14-acea-dc997452fe58] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [160eff83-090d-4b14-acea-dc997452fe58] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 20.0749404s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220601174345-3412 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220601174345-3412 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.5994045s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (29.16s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220601175654-3412 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220601175654-3412 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
E0601 18:10:11.863956    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:11:35.080441    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:15:11.865203    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:20:11.881861    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:25:11.908022    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:28:15.145976    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:30:11.918170    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
E0601 18:35:11.931719    3412 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220601174345-3412\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (19.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601175654-3412 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220601175654-3412 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-kpkgl" [0e1e3161-1a77-4441-ac3d-fe080e97cb9b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-kpkgl" [0e1e3161-1a77-4441-ac3d-fe080e97cb9b] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.043214s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (19.14s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (44.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601183740-3412 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220601183740-3412 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (5.0817907s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601183740-3412 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Done: kubectl --context ingress-addon-legacy-20220601183740-3412 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.5074772s)
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601183740-3412 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context ingress-addon-legacy-20220601183740-3412 replace --force -f testdata\nginx-pod-svc.yaml: (1.3194073s)
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [65c84dd6-9301-4e56-8c0d-7a9813cc9493] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [65c84dd6-9301-4e56-8c0d-7a9813cc9493] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 30.1670402s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601183740-3412 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.1841678s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (44.38s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/flannel (7.15s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220601193434-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220601193434-3412

=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220601193434-3412: (7.1520523s)
--- SKIP: TestNetworkPlugins/group/flannel (7.15s)

TestNetworkPlugins/group/custom-flannel (9.53s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220601193442-3412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220601193442-3412
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220601193442-3412: (9.5329919s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (9.53s)
